Test Report: Hyper-V_Windows 19265

4b25178fc7513411450a4d543cff32ee34a2d14b:2024-07-16:35370
Tests failed (25/210)

TestAddons/parallel/Registry (72.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 31.475ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tzs5b" [6a01ed28-a251-4305-b912-3bdc64473548] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0193751s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8t5wx" [236014e2-fc14-4543-9d47-a39a26a84993] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0127537s
addons_test.go:342: (dbg) Run:  kubectl --context addons-933500 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-933500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-933500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.1876171s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 ip: (2.6017582s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0716 17:14:17.465831   13924 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-933500 ip"
2024/07/16 17:14:19 [DEBUG] GET http://172.27.174.219:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable registry --alsologtostderr -v=1: (15.8380609s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-933500 -n addons-933500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-933500 -n addons-933500: (12.7109239s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 logs -n 25: (9.3489323s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |                     |
	|         | -p download-only-614900              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT | 16 Jul 24 17:05 PDT |
	| delete  | -p download-only-614900              | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT | 16 Jul 24 17:05 PDT |
	| start   | -o=json --download-only              | download-only-914500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |                     |
	|         | -p download-only-914500              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| delete  | -p download-only-914500              | download-only-914500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| start   | -o=json --download-only              | download-only-359900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT |                     |
	|         | -p download-only-359900              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0  |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| delete  | -p download-only-359900              | download-only-359900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| delete  | -p download-only-614900              | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| delete  | -p download-only-914500              | download-only-914500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| delete  | -p download-only-359900              | download-only-359900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| start   | --download-only -p                   | binary-mirror-508700 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT |                     |
	|         | binary-mirror-508700                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:64079               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-508700              | binary-mirror-508700 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| addons  | disable dashboard -p                 | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT |                     |
	|         | addons-933500                        |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT |                     |
	|         | addons-933500                        |                      |                   |         |                     |                     |
	| start   | -p addons-933500 --wait=true         | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:14 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:14 PDT | 16 Jul 24 17:14 PDT |
	|         | -p addons-933500                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:14 PDT | 16 Jul 24 17:14 PDT |
	|         | -p addons-933500                     |                      |                   |         |                     |                     |
	| ip      | addons-933500 ip                     | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:14 PDT | 16 Jul 24 17:14 PDT |
	| addons  | addons-933500 addons disable         | addons-933500        | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:14 PDT | 16 Jul 24 17:14 PDT |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:06:39
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:06:39.611988   14824 out.go:291] Setting OutFile to fd 812 ...
	I0716 17:06:39.612655   14824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:06:39.612655   14824 out.go:304] Setting ErrFile to fd 816...
	I0716 17:06:39.612655   14824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:06:39.637841   14824 out.go:298] Setting JSON to false
	I0716 17:06:39.640895   14824 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16438,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:06:39.640895   14824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:06:39.647322   14824 out.go:177] * [addons-933500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:06:39.651221   14824 notify.go:220] Checking for updates...
	I0716 17:06:39.653189   14824 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:06:39.656212   14824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:06:39.660120   14824 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:06:39.662799   14824 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:06:39.665806   14824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:06:39.668434   14824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:06:44.874434   14824 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:06:44.876888   14824 start.go:297] selected driver: hyperv
	I0716 17:06:44.876888   14824 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:06:44.876888   14824 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:06:44.922319   14824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:06:44.923957   14824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:06:44.923957   14824 cni.go:84] Creating CNI manager for ""
	I0716 17:06:44.923957   14824 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:06:44.923957   14824 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0716 17:06:44.923957   14824 start.go:340] cluster config:
	{Name:addons-933500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-933500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:06:44.923957   14824 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:06:44.929636   14824 out.go:177] * Starting "addons-933500" primary control-plane node in "addons-933500" cluster
	I0716 17:06:44.932574   14824 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:06:44.932574   14824 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:06:44.932574   14824 cache.go:56] Caching tarball of preloaded images
	I0716 17:06:44.933108   14824 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:06:44.933108   14824 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:06:44.933108   14824 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\config.json ...
	I0716 17:06:44.934144   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\config.json: {Name:mkd264d5ee6b0351621b2924c63ad965f799ad1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:06:44.934652   14824 start.go:360] acquireMachinesLock for addons-933500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:06:44.935634   14824 start.go:364] duration metric: took 982.1µs to acquireMachinesLock for "addons-933500"
	I0716 17:06:44.935970   14824 start.go:93] Provisioning new machine with config: &{Name:addons-933500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-933500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:06:44.935970   14824 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:06:44.939557   14824 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0716 17:06:44.940253   14824 start.go:159] libmachine.API.Create for "addons-933500" (driver="hyperv")
	I0716 17:06:44.940253   14824 client.go:168] LocalClient.Create starting
	I0716 17:06:44.940682   14824 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:06:45.039984   14824 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:06:45.301660   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:06:47.385066   14824 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:06:47.385066   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:06:47.385397   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:06:49.058583   14824 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:06:49.058813   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:06:49.058899   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:06:50.499280   14824 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:06:50.499280   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:06:50.500288   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:06:54.106538   14824 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:06:54.106538   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:06:54.111027   14824 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:06:54.554485   14824 main.go:141] libmachine: Creating SSH key...
	I0716 17:06:54.671483   14824 main.go:141] libmachine: Creating VM...
	I0716 17:06:54.671483   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:06:57.376367   14824 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:06:57.376905   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:06:57.376905   14824 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:06:57.377088   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:06:58.984286   14824 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:06:58.984589   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:06:58.984589   14824 main.go:141] libmachine: Creating VHD
	I0716 17:06:58.984589   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:07:02.673965   14824 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3938355-1E00-4B40-9DE5-68E8AD200ADB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:07:02.674210   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:02.674210   14824 main.go:141] libmachine: Writing magic tar header
	I0716 17:07:02.674419   14824 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:07:02.684108   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:07:05.850976   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:05.850976   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:05.851699   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\disk.vhd' -SizeBytes 20000MB
	I0716 17:07:08.827360   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:08.827604   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:08.827604   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-933500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0716 17:07:12.807001   14824 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-933500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:07:12.807001   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:12.808045   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-933500 -DynamicMemoryEnabled $false
	I0716 17:07:14.958114   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:14.958114   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:14.958114   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-933500 -Count 2
	I0716 17:07:17.031331   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:17.031481   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:17.031481   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-933500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\boot2docker.iso'
	I0716 17:07:19.475519   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:19.475519   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:19.475519   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-933500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\disk.vhd'
	I0716 17:07:22.036866   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:22.036866   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:22.036866   14824 main.go:141] libmachine: Starting VM...
	I0716 17:07:22.037726   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-933500
	I0716 17:07:25.685681   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:25.685681   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:25.685681   14824 main.go:141] libmachine: Waiting for host to start...
	I0716 17:07:25.686661   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:27.977503   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:27.977503   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:27.978214   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:07:30.515751   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:30.515751   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:31.518696   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:33.728314   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:33.728378   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:33.728378   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:07:36.280190   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:36.280190   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:37.289174   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:39.448659   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:39.449451   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:39.449451   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:07:41.955219   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:41.955219   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:42.956237   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:45.062388   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:45.062388   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:45.063060   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:07:47.486163   14824 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:07:47.486397   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:48.487235   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:50.628444   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:50.628444   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:50.628791   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:07:53.141760   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:07:53.141760   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:53.142557   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:55.226674   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:55.226674   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:55.227515   14824 machine.go:94] provisionDockerMachine start ...
	I0716 17:07:55.227675   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:07:57.287061   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:07:57.287061   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:57.287328   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:07:59.690836   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:07:59.691163   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:07:59.697151   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:07:59.708225   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:07:59.708225   14824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:07:59.836828   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:07:59.836934   14824 buildroot.go:166] provisioning hostname "addons-933500"
	I0716 17:07:59.837024   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:01.866432   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:01.866862   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:01.867155   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:04.305083   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:04.305083   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:04.311488   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:04.312321   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:04.312321   14824 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-933500 && echo "addons-933500" | sudo tee /etc/hostname
	I0716 17:08:04.461564   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-933500
	
	I0716 17:08:04.461795   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:06.539761   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:06.539761   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:06.540285   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:09.011382   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:09.011382   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:09.017769   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:09.018520   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:09.018520   14824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-933500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-933500/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-933500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:08:09.178245   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:08:09.178392   14824 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:08:09.178526   14824 buildroot.go:174] setting up certificates
	I0716 17:08:09.178526   14824 provision.go:84] configureAuth start
	I0716 17:08:09.178689   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:11.217989   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:11.217989   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:11.219046   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:13.669750   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:13.669750   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:13.669881   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:15.690931   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:15.691970   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:15.692058   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:18.147118   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:18.147118   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:18.147444   14824 provision.go:143] copyHostCerts
	I0716 17:08:18.148326   14824 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:08:18.150489   14824 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:08:18.152310   14824 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:08:18.153550   14824 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-933500 san=[127.0.0.1 172.27.174.219 addons-933500 localhost minikube]
	I0716 17:08:18.357903   14824 provision.go:177] copyRemoteCerts
	I0716 17:08:18.370901   14824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:08:18.370901   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:20.480843   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:20.481030   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:20.481030   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:22.937402   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:22.937402   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:22.938227   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:08:23.046790   14824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6758221s)
	I0716 17:08:23.047495   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:08:23.099563   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:08:23.149231   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:08:23.196241   14824 provision.go:87] duration metric: took 14.0166661s to configureAuth
	I0716 17:08:23.196241   14824 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:08:23.196927   14824 config.go:182] Loaded profile config "addons-933500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:08:23.197017   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:25.217354   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:25.218321   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:25.218321   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:27.663200   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:27.663200   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:27.669486   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:27.670252   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:27.670252   14824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:08:27.796042   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:08:27.796124   14824 buildroot.go:70] root file system type: tmpfs
	I0716 17:08:27.796227   14824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:08:27.796227   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:29.844285   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:29.844682   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:29.844744   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:32.271991   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:32.272882   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:32.278154   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:32.278807   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:32.278895   14824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:08:32.441677   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:08:32.441805   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:34.482154   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:34.482154   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:34.482154   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:36.856189   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:36.856189   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:36.862658   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:36.863071   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:36.863071   14824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:08:39.063662   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:08:39.063662   14824 machine.go:97] duration metric: took 43.8359672s to provisionDockerMachine
	I0716 17:08:39.063662   14824 client.go:171] duration metric: took 1m54.1229411s to LocalClient.Create
	I0716 17:08:39.063802   14824 start.go:167] duration metric: took 1m54.1230812s to libmachine.API.Create "addons-933500"
	I0716 17:08:39.063979   14824 start.go:293] postStartSetup for "addons-933500" (driver="hyperv")
	I0716 17:08:39.064023   14824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:08:39.078235   14824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:08:39.078235   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:41.162332   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:41.162332   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:41.162616   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:43.531383   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:43.532367   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:43.532628   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:08:43.643544   14824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5652902s)
	I0716 17:08:43.660342   14824 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:08:43.668463   14824 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:08:43.668463   14824 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:08:43.669065   14824 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:08:43.669350   14824 start.go:296] duration metric: took 4.6053073s for postStartSetup
	I0716 17:08:43.671432   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:45.686191   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:45.686191   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:45.686307   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:48.070264   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:48.070264   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:48.071088   14824 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\config.json ...
	I0716 17:08:48.073961   14824 start.go:128] duration metric: took 2m3.1373967s to createHost
	I0716 17:08:48.074060   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:50.102397   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:50.102397   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:50.103204   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:52.494299   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:52.495173   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:52.500462   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:52.501181   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:52.501181   14824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:08:52.631538   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721174932.637146097
	
	I0716 17:08:52.631538   14824 fix.go:216] guest clock: 1721174932.637146097
	I0716 17:08:52.631538   14824 fix.go:229] Guest: 2024-07-16 17:08:52.637146097 -0700 PDT Remote: 2024-07-16 17:08:48.0739617 -0700 PDT m=+128.555660101 (delta=4.563184397s)
	I0716 17:08:52.631719   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:54.654889   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:54.655945   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:54.656013   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:08:57.059658   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:08:57.059658   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:57.068112   14824 main.go:141] libmachine: Using SSH client type: native
	I0716 17:08:57.068951   14824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.174.219 22 <nil> <nil>}
	I0716 17:08:57.068951   14824 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721174932
	I0716 17:08:57.203477   14824 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:08:52 UTC 2024
	
	I0716 17:08:57.203533   14824 fix.go:236] clock set: Wed Jul 17 00:08:52 UTC 2024
	 (err=<nil>)
	I0716 17:08:57.203533   14824 start.go:83] releasing machines lock for "addons-933500", held for 2m12.2673566s
	I0716 17:08:57.203723   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:08:59.249049   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:08:59.249938   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:08:59.249938   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:09:01.758058   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:09:01.758058   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:09:01.762673   14824 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:09:01.762876   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:01.774226   14824 ssh_runner.go:195] Run: cat /version.json
	I0716 17:09:01.774226   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:03.938115   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:09:03.938115   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:09:03.938741   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:09:03.947725   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:09:03.947725   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:09:03.947725   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:09:06.570833   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:09:06.571294   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:09:06.571449   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:09:06.593118   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:09:06.593118   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:09:06.593913   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:09:06.657840   14824 ssh_runner.go:235] Completed: cat /version.json: (4.8835941s)
	I0716 17:09:06.672155   14824 ssh_runner.go:195] Run: systemctl --version
	I0716 17:09:06.677926   14824 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9145951s)
	W0716 17:09:06.678016   14824 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:09:06.696937   14824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:09:06.706309   14824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:09:06.720800   14824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:09:06.751814   14824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:09:06.751814   14824 start.go:495] detecting cgroup driver to use...
	I0716 17:09:06.751814   14824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:09:06.799509   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0716 17:09:06.831552   14824 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:09:06.831552   14824 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:09:06.836038   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:09:06.858649   14824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:09:06.871025   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:09:06.904861   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:09:06.940113   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:09:06.972940   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:09:07.004538   14824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:09:07.038393   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:09:07.077057   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:09:07.116604   14824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
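The run of commands above rewrites `/etc/containerd/config.toml` through a series of in-place `sed` edits. A minimal standalone sketch of the two key edits (cgroup driver and pause image), run against an invented sample config rather than the real file:

```shell
# Sketch only: the sample TOML content is invented for illustration; minikube
# applies these same sed expressions to /etc/containerd/config.toml in the VM
# (prefixed with sudo).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Force the cgroupfs driver and pin the pause image, as in the log above.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$tmp"
grep -E 'SystemdCgroup|sandbox_image' "$tmp"
```

The `( *)` capture preserves the original indentation, which is why the edits survive regardless of how deeply nested the key is in the TOML.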
	I0716 17:09:07.147606   14824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:09:07.179351   14824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:09:07.209885   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:07.410196   14824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:09:07.440900   14824 start.go:495] detecting cgroup driver to use...
	I0716 17:09:07.454660   14824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:09:07.493723   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:09:07.525033   14824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:09:07.574077   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:09:07.608995   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:09:07.644234   14824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:09:07.705395   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:09:07.726393   14824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:09:07.775006   14824 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:09:07.795901   14824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:09:07.813827   14824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:09:07.856674   14824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:09:08.060436   14824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:09:08.254762   14824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:09:08.255134   14824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:09:08.301330   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:08.492362   14824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:09:11.078414   14824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5860409s)
	I0716 17:09:11.092254   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:09:11.133774   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:09:11.167816   14824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:09:11.362357   14824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:09:11.563564   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:11.745928   14824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:09:11.787338   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:09:11.820539   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:12.003008   14824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:09:12.108101   14824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:09:12.121428   14824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:09:12.132132   14824 start.go:563] Will wait 60s for crictl version
	I0716 17:09:12.144851   14824 ssh_runner.go:195] Run: which crictl
	I0716 17:09:12.163297   14824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:09:12.217785   14824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:09:12.228878   14824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:09:12.278508   14824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:09:12.315581   14824 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:09:12.315865   14824 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:09:12.319955   14824 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:09:12.319955   14824 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:09:12.319955   14824 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:09:12.319955   14824 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:09:12.322659   14824 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:09:12.322659   14824 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:09:12.335678   14824 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:09:12.341905   14824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
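The `/etc/hosts` one-liner above is idempotent: it first strips any existing `host.minikube.internal` entry, then appends a fresh one with the detected gateway IP. A sketch against a temp file (IP taken from the log; paths invented):

```shell
# Sketch of the host.minikube.internal update, run on a throwaway hosts file
# instead of /etc/hosts (so no sudo cp is needed).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry (match on the trailing tab-separated hostname), then append.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.27.160.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

Running it twice leaves exactly one entry, which is the point of the grep-then-append shape.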
	I0716 17:09:12.364814   14824 kubeadm.go:883] updating cluster {Name:addons-933500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-933500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.174.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:09:12.365093   14824 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:09:12.374759   14824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:09:12.399197   14824 docker.go:685] Got preloaded images: 
	I0716 17:09:12.399197   14824 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:09:12.411914   14824 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:09:12.442715   14824 ssh_runner.go:195] Run: which lz4
	I0716 17:09:12.460127   14824 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:09:12.466361   14824 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:09:12.466361   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:09:14.687242   14824 docker.go:649] duration metric: took 2.2382031s to copy over tarball
	I0716 17:09:14.698584   14824 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:09:20.215650   14824 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.5170431s)
	I0716 17:09:20.215650   14824 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:09:20.281599   14824 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:09:20.298572   14824 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:09:20.340055   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:20.514550   14824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:09:25.846832   14824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.3322603s)
	I0716 17:09:25.855815   14824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:09:25.881617   14824 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
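After restarting docker, minikube decides whether the preload succeeded by grepping the `docker images` output for the pinned apiserver tag. A hedged sketch of that check, with the image list faked by a here-string instead of a live docker daemon:

```shell
# Sketch only: `images` stands in for the output of
#   docker images --format {{.Repository}}:{{.Tag}}
# seen in the log; no docker daemon is contacted here.
images='registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/pause:3.9'
if printf '%s\n' "$images" | grep -q '^registry.k8s.io/kube-apiserver:v1.30.2$'; then
  echo preloaded
else
  echo "kube-apiserver:v1.30.2 wasn't preloaded"
fi
```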
	I0716 17:09:25.881617   14824 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:09:25.881617   14824 kubeadm.go:934] updating node { 172.27.174.219 8443 v1.30.2 docker true true} ...
	I0716 17:09:25.882635   14824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-933500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.174.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-933500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:09:25.890599   14824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:09:25.924607   14824 cni.go:84] Creating CNI manager for ""
	I0716 17:09:25.924607   14824 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:09:25.924607   14824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:09:25.924607   14824 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.174.219 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-933500 NodeName:addons-933500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.174.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.174.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:09:25.924607   14824 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.174.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-933500"
	  kubeletExtraArgs:
	    node-ip: 172.27.174.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.174.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:09:25.937614   14824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:09:25.970809   14824 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:09:25.980426   14824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 17:09:26.001965   14824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0716 17:09:26.033398   14824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:09:26.074790   14824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 17:09:26.118280   14824 ssh_runner.go:195] Run: grep 172.27.174.219	control-plane.minikube.internal$ /etc/hosts
	I0716 17:09:26.124973   14824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.174.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:09:26.163336   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:26.344894   14824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:09:26.375911   14824 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500 for IP: 172.27.174.219
	I0716 17:09:26.375911   14824 certs.go:194] generating shared ca certs ...
	I0716 17:09:26.376013   14824 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:26.376578   14824 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:09:27.383959   14824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0716 17:09:27.383959   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.385913   14824 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0716 17:09:27.385913   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.387933   14824 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:09:27.568829   14824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0716 17:09:27.568829   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.569843   14824 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0716 17:09:27.569843   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.570751   14824 certs.go:256] generating profile certs ...
	I0716 17:09:27.571827   14824 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.key
	I0716 17:09:27.571827   14824 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt with IP's: []
	I0716 17:09:27.674496   14824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt ...
	I0716 17:09:27.674496   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: {Name:mk85c4ad8c4faadd78199247bc6eca17c32b4b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.676262   14824 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.key ...
	I0716 17:09:27.676422   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.key: {Name:mkd396631d70608deb83d5346ecac37f63e0bf5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.676593   14824 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.key.2dc7dd10
	I0716 17:09:27.677599   14824 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.crt.2dc7dd10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.174.219]
	I0716 17:09:27.901641   14824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.crt.2dc7dd10 ...
	I0716 17:09:27.901641   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.crt.2dc7dd10: {Name:mk713d463a1252531bd6f86c961c5e45ab5c9b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.902669   14824 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.key.2dc7dd10 ...
	I0716 17:09:27.902669   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.key.2dc7dd10: {Name:mk54adc50e687d649dad7d4665425995bec3d4b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:27.903664   14824 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.crt.2dc7dd10 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.crt
	I0716 17:09:27.915329   14824 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.key.2dc7dd10 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.key
	I0716 17:09:27.916330   14824 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.key
	I0716 17:09:27.916330   14824 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.crt with IP's: []
	I0716 17:09:28.214118   14824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.crt ...
	I0716 17:09:28.214118   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.crt: {Name:mk0cce6e20941a3ea7d9e54e6e222ac2a9378cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:28.215210   14824 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.key ...
	I0716 17:09:28.216220   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.key: {Name:mkbc559d951c34dfbf9a8bc10c3708adb4f12a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:28.228525   14824 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:09:28.228967   14824 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:09:28.229859   14824 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:09:28.230434   14824 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:09:28.234955   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:09:28.283766   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:09:28.330138   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:09:28.370330   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:09:28.410923   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0716 17:09:28.458633   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0716 17:09:28.503440   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:09:28.550954   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:09:28.586989   14824 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:09:28.619955   14824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:09:28.662198   14824 ssh_runner.go:195] Run: openssl version
	I0716 17:09:28.681008   14824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:09:28.709572   14824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:09:28.716582   14824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:09:28.727972   14824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:09:28.748677   14824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
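The `b5213941.0` link above is OpenSSL's subject-hash trust-link convention: the file name is the hex subject hash of the certificate, suffixed `.0`. A sketch with a throwaway self-signed CA (directory and file names invented; requires the `openssl` CLI):

```shell
# Sketch: create a disposable CA cert and link it under its subject hash, the
# same shape as the ln -fs step in the log above.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/minikubeCA.pem" -subj "/CN=minikubeCA" -days 1 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")  # 8 hex chars, e.g. b5213941
ln -fs "$dir/minikubeCA.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```

For `CN=minikubeCA` the hash matches the `b5213941.0` name the log installs under `/etc/ssl/certs`, since the subject hash depends only on the subject DN.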
	I0716 17:09:28.778642   14824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:09:28.785499   14824 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:09:28.785955   14824 kubeadm.go:392] StartCluster: {Name:addons-933500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-933500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.174.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:09:28.795147   14824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:09:28.829838   14824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:09:28.856834   14824 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:09:28.885644   14824 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:09:28.903131   14824 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:09:28.903131   14824 kubeadm.go:157] found existing configuration files:
	
	I0716 17:09:28.915808   14824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:09:28.931521   14824 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:09:28.943543   14824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:09:28.970506   14824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:09:28.987154   14824 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:09:28.999507   14824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:09:29.026454   14824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:09:29.042125   14824 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:09:29.057604   14824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:09:29.088977   14824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:09:29.106619   14824 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:09:29.117623   14824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:09:29.134318   14824 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:09:29.368714   14824 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:09:41.734991   14824 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:09:41.735279   14824 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:09:41.735279   14824 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:09:41.735279   14824 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:09:41.735900   14824 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:09:41.735900   14824 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:09:41.737661   14824 out.go:204]   - Generating certificates and keys ...
	I0716 17:09:41.737661   14824 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:09:41.737661   14824 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:09:41.737661   14824 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:09:41.737661   14824 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:09:41.739480   14824 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:09:41.739615   14824 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:09:41.739773   14824 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:09:41.740081   14824 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-933500 localhost] and IPs [172.27.174.219 127.0.0.1 ::1]
	I0716 17:09:41.740218   14824 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:09:41.740218   14824 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-933500 localhost] and IPs [172.27.174.219 127.0.0.1 ::1]
	I0716 17:09:41.740218   14824 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:09:41.740218   14824 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:09:41.740218   14824 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:09:41.740218   14824 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:09:41.740218   14824 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:09:41.740218   14824 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:09:41.740218   14824 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:09:41.740218   14824 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:09:41.740218   14824 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:09:41.740218   14824 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:09:41.740218   14824 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:09:41.747540   14824 out.go:204]   - Booting up control plane ...
	I0716 17:09:41.747665   14824 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:09:41.747665   14824 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:09:41.748213   14824 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:09:41.748577   14824 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:09:41.748659   14824 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:09:41.748659   14824 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:09:41.749370   14824 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:09:41.749557   14824 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:09:41.749644   14824 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.944276ms
	I0716 17:09:41.749644   14824 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:09:41.749644   14824 kubeadm.go:310] [api-check] The API server is healthy after 6.501827714s
	I0716 17:09:41.750184   14824 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:09:41.750442   14824 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:09:41.750442   14824 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:09:41.750966   14824 kubeadm.go:310] [mark-control-plane] Marking the node addons-933500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:09:41.751097   14824 kubeadm.go:310] [bootstrap-token] Using token: a2mrro.26bu50yhkcv463l6
	I0716 17:09:41.754649   14824 out.go:204]   - Configuring RBAC rules ...
	I0716 17:09:41.755174   14824 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:09:41.755341   14824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:09:41.755675   14824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:09:41.755872   14824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:09:41.756073   14824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:09:41.756362   14824 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:09:41.756362   14824 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:09:41.756362   14824 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:09:41.756917   14824 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:09:41.756955   14824 kubeadm.go:310] 
	I0716 17:09:41.756955   14824 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:09:41.756955   14824 kubeadm.go:310] 
	I0716 17:09:41.756955   14824 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:09:41.756955   14824 kubeadm.go:310] 
	I0716 17:09:41.756955   14824 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:09:41.757714   14824 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:09:41.757809   14824 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:09:41.757809   14824 kubeadm.go:310] 
	I0716 17:09:41.757934   14824 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:09:41.757934   14824 kubeadm.go:310] 
	I0716 17:09:41.757934   14824 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:09:41.757934   14824 kubeadm.go:310] 
	I0716 17:09:41.757934   14824 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:09:41.758461   14824 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:09:41.758611   14824 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:09:41.758611   14824 kubeadm.go:310] 
	I0716 17:09:41.758611   14824 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:09:41.758611   14824 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:09:41.758611   14824 kubeadm.go:310] 
	I0716 17:09:41.759134   14824 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2mrro.26bu50yhkcv463l6 \
	I0716 17:09:41.759258   14824 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:09:41.759258   14824 kubeadm.go:310] 	--control-plane 
	I0716 17:09:41.759258   14824 kubeadm.go:310] 
	I0716 17:09:41.759476   14824 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:09:41.759476   14824 kubeadm.go:310] 
	I0716 17:09:41.759634   14824 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2mrro.26bu50yhkcv463l6 \
	I0716 17:09:41.759891   14824 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:09:41.759891   14824 cni.go:84] Creating CNI manager for ""
	I0716 17:09:41.759891   14824 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:09:41.762093   14824 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0716 17:09:41.776059   14824 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0716 17:09:41.795075   14824 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0716 17:09:41.835800   14824 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:09:41.849472   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:41.849472   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-933500 minikube.k8s.io/updated_at=2024_07_16T17_09_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-933500 minikube.k8s.io/primary=true
	I0716 17:09:41.855113   14824 ops.go:34] apiserver oom_adj: -16
	I0716 17:09:42.015070   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:42.529289   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:43.017644   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:43.522851   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:44.026924   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:44.533086   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:45.020048   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:45.518331   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:46.022443   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:46.524268   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:47.025714   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:47.536052   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:48.024201   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:48.521000   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:49.026432   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:49.519688   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:50.018547   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:50.524008   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:51.025763   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:51.532008   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:52.019003   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:52.521357   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:53.021771   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:53.524718   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:54.023641   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:54.529894   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:55.031295   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:55.524506   14824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:09:55.661572   14824 kubeadm.go:1113] duration metric: took 13.8257155s to wait for elevateKubeSystemPrivileges
	I0716 17:09:55.661718   14824 kubeadm.go:394] duration metric: took 26.8757174s to StartCluster
	I0716 17:09:55.661718   14824 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:55.661945   14824 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:09:55.663357   14824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:09:55.665637   14824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:09:55.665637   14824 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.174.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:09:55.665637   14824 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0716 17:09:55.666228   14824 addons.go:69] Setting yakd=true in profile "addons-933500"
	I0716 17:09:55.666313   14824 addons.go:69] Setting inspektor-gadget=true in profile "addons-933500"
	I0716 17:09:55.666313   14824 addons.go:69] Setting metrics-server=true in profile "addons-933500"
	I0716 17:09:55.666313   14824 addons.go:234] Setting addon metrics-server=true in "addons-933500"
	I0716 17:09:55.666313   14824 addons.go:234] Setting addon inspektor-gadget=true in "addons-933500"
	I0716 17:09:55.666313   14824 addons.go:69] Setting storage-provisioner=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting default-storageclass=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting volumesnapshots=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:234] Setting addon storage-provisioner=true in "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting ingress-dns=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting helm-tiller=true in profile "addons-933500"
	I0716 17:09:55.666610   14824 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-933500"
	I0716 17:09:55.666709   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.666817   14824 addons.go:69] Setting registry=true in profile "addons-933500"
	I0716 17:09:55.666877   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.666877   14824 addons.go:234] Setting addon registry=true in "addons-933500"
	I0716 17:09:55.666923   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.666479   14824 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting volcano=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting cloud-spanner=true in profile "addons-933500"
	I0716 17:09:55.667383   14824 addons.go:234] Setting addon cloud-spanner=true in "addons-933500"
	I0716 17:09:55.667383   14824 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-933500"
	I0716 17:09:55.667383   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.667534   14824 addons.go:234] Setting addon volcano=true in "addons-933500"
	I0716 17:09:55.667563   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.666479   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.666479   14824 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-933500"
	I0716 17:09:55.667971   14824 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting gcp-auth=true in profile "addons-933500"
	I0716 17:09:55.666479   14824 addons.go:69] Setting ingress=true in profile "addons-933500"
	I0716 17:09:55.670090   14824 mustload.go:65] Loading cluster: addons-933500
	I0716 17:09:55.670280   14824 out.go:177] * Verifying Kubernetes components...
	I0716 17:09:55.670090   14824 addons.go:234] Setting addon ingress=true in "addons-933500"
	I0716 17:09:55.666313   14824 config.go:182] Loaded profile config "addons-933500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:09:55.666313   14824 addons.go:234] Setting addon yakd=true in "addons-933500"
	I0716 17:09:55.670681   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.670820   14824 config.go:182] Loaded profile config "addons-933500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:09:55.666479   14824 addons.go:234] Setting addon volumesnapshots=true in "addons-933500"
	I0716 17:09:55.666479   14824 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-933500"
	I0716 17:09:55.666610   14824 addons.go:234] Setting addon ingress-dns=true in "addons-933500"
	I0716 17:09:55.666610   14824 addons.go:234] Setting addon helm-tiller=true in "addons-933500"
	I0716 17:09:55.667684   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.666479   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.671027   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.670820   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.671260   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.671352   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.671477   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.671611   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.670820   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.671611   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.671352   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:09:55.670820   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.670820   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.673221   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.674233   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.675219   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.675219   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.675219   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.676222   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.676222   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.676222   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.676222   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:09:55.702877   14824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:09:56.428342   14824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:09:57.228356   14824 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.5254733s)
	I0716 17:09:57.250982   14824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:09:58.948465   14824 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.5201127s)
	I0716 17:09:58.948465   14824 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:09:58.954754   14824 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7037656s)
	I0716 17:09:58.957969   14824 node_ready.go:35] waiting up to 6m0s for node "addons-933500" to be "Ready" ...
	I0716 17:09:59.016229   14824 node_ready.go:49] node "addons-933500" has status "Ready":"True"
	I0716 17:09:59.016229   14824 node_ready.go:38] duration metric: took 58.1968ms for node "addons-933500" to be "Ready" ...
	I0716 17:09:59.016229   14824 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 17:09:59.372646   14824 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace to be "Ready" ...
	I0716 17:09:59.743833   14824 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-933500" context rescaled to 1 replicas
	I0716 17:10:01.576561   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:02.330219   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.330219   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.335219   14824 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0716 17:10:02.338236   14824 out.go:177]   - Using image docker.io/registry:2.8.3
	I0716 17:10:02.343214   14824 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0716 17:10:02.343214   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0716 17:10:02.343214   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.526810   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.526810   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.527800   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.527800   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.528798   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.530788   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.530788   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0716 17:10:02.532785   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.533796   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.533796   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:10:02.533796   14824 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0716 17:10:02.535796   14824 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0716 17:10:02.536789   14824 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0716 17:10:02.536789   14824 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0716 17:10:02.536789   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.538848   14824 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0716 17:10:02.538848   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0716 17:10:02.538848   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.542799   14824 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0716 17:10:02.542799   14824 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0716 17:10:02.543420   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.791696   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.791696   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.795304   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.795304   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.800534   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0716 17:10:02.800953   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.800953   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.801308   14824 addons.go:234] Setting addon default-storageclass=true in "addons-933500"
	I0716 17:10:02.823037   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:10:02.824307   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.829299   14824 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:10:02.850311   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.852326   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.850311   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0716 17:10:02.861611   14824 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:10:02.865300   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.866329   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.871437   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:10:02.871437   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.873310   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.873310   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.887313   14824 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0716 17:10:02.898311   14824 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0716 17:10:02.945860   14824 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0716 17:10:02.948867   14824 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0716 17:10:02.948867   14824 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0716 17:10:02.948867   14824 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0716 17:10:02.948867   14824 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0716 17:10:02.959494   14824 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0716 17:10:02.959857   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0716 17:10:02.962931   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0716 17:10:02.964904   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.964904   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.965369   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.965369   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.965369   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:02.965369   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:02.984519   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:02.987858   14824 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0716 17:10:02.995753   14824 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0716 17:10:02.999954   14824 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0716 17:10:03.003994   14824 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0716 17:10:03.003994   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0716 17:10:03.003994   14824 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0716 17:10:03.003994   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:03.011747   14824 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0716 17:10:03.011747   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0716 17:10:03.011747   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:03.018828   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0716 17:10:03.022751   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0716 17:10:03.068727   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0716 17:10:03.131876   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0716 17:10:03.137876   14824 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0716 17:10:03.142032   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0716 17:10:03.142032   14824 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0716 17:10:03.142032   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:03.694685   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:04.020891   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:04.020891   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:04.104904   14824 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0716 17:10:04.299512   14824 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0716 17:10:04.616515   14824 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0716 17:10:04.700517   14824 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0716 17:10:04.700517   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0716 17:10:04.700517   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:05.120799   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:05.120799   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:05.123538   14824 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-933500"
	I0716 17:10:05.123538   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:10:05.125523   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:05.228535   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:05.228535   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:05.265522   14824 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0716 17:10:05.300521   14824 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0716 17:10:05.300521   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0716 17:10:05.309855   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:05.924580   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:08.365651   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:08.828644   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:08.828644   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:08.828644   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:08.874001   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:08.874001   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:08.874001   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:08.875359   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:08.875359   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:08.877060   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:08.897561   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:08.897561   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:08.897695   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:08.944337   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:08.944337   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:08.944337   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:08.970268   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:08.970268   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:08.970268   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:09.047488   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:09.047488   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:09.047488   14824 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:10:09.047488   14824 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:10:09.047488   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:09.377281   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:09.377523   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:09.377650   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:09.626462   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:09.626903   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:09.626903   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:09.717168   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:09.717168   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:09.717168   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:09.793264   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:09.793264   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:09.793264   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:09.817327   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:09.817327   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:09.818330   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:10.316988   14824 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0716 17:10:10.316988   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:10.712410   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:12.919286   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:12.920287   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:12.920287   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:12.986532   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:13.256540   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:13.256540   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:13.335539   14824 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0716 17:10:13.399538   14824 out.go:177]   - Using image docker.io/busybox:stable
	I0716 17:10:13.435538   14824 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0716 17:10:13.435538   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0716 17:10:13.436048   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:14.283226   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:14.283226   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:14.283226   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:15.399207   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:15.460210   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:15.460210   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:15.460210   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:16.246933   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:16.247033   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.247277   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:16.377304   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:16.377304   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.377304   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:16.456149   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:16.457590   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.457884   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:16.599526   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:16.599526   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.599748   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:16.629413   14824 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0716 17:10:16.629478   14824 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0716 17:10:16.681536   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:16.681536   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.681536   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:16.745579   14824 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0716 17:10:16.745579   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0716 17:10:16.786438   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:16.786438   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.786766   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:16.863589   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:16.863589   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:16.863589   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:16.911585   14824 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0716 17:10:16.911585   14824 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0716 17:10:16.924573   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0716 17:10:16.989671   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:10:17.009681   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:17.009681   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:17.010683   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:17.056700   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0716 17:10:17.102922   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:17.102995   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:17.103181   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:17.121974   14824 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0716 17:10:17.122090   14824 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0716 17:10:17.145423   14824 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0716 17:10:17.145554   14824 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0716 17:10:17.183337   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:17.183337   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:17.183791   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:17.282833   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:17.283038   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:17.283271   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:17.319165   14824 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0716 17:10:17.319165   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0716 17:10:17.368780   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:17.369770   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:17.369770   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:17.399660   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:17.424359   14824 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0716 17:10:17.424491   14824 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0716 17:10:17.482211   14824 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0716 17:10:17.482211   14824 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0716 17:10:17.696069   14824 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0716 17:10:17.696069   14824 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0716 17:10:17.732066   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0716 17:10:17.733064   14824 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0716 17:10:17.733064   14824 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0716 17:10:17.745074   14824 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0716 17:10:17.745074   14824 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0716 17:10:17.745074   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0716 17:10:17.796340   14824 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0716 17:10:17.796340   14824 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0716 17:10:17.909936   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0716 17:10:17.909936   14824 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0716 17:10:17.951729   14824 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0716 17:10:17.952573   14824 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0716 17:10:17.970105   14824 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0716 17:10:17.970105   14824 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0716 17:10:18.015108   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:18.015108   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:18.015108   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:18.026111   14824 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0716 17:10:18.026111   14824 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0716 17:10:18.397286   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:18.397286   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:18.398282   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:18.422313   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0716 17:10:18.442278   14824 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0716 17:10:18.442278   14824 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0716 17:10:18.516291   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0716 17:10:18.516291   14824 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0716 17:10:18.523274   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0716 17:10:18.605376   14824 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0716 17:10:18.605454   14824 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0716 17:10:18.660876   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0716 17:10:18.661011   14824 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0716 17:10:18.735099   14824 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0716 17:10:18.735168   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0716 17:10:18.804996   14824 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0716 17:10:18.805077   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0716 17:10:18.870435   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0716 17:10:18.870435   14824 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0716 17:10:18.895499   14824 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0716 17:10:18.895657   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0716 17:10:18.903087   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:18.903263   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:18.903498   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:18.992398   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0716 17:10:19.035892   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0716 17:10:19.073231   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0716 17:10:19.148165   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0716 17:10:19.148237   14824 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0716 17:10:19.190681   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0716 17:10:19.385425   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:19.385591   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:19.385724   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:19.408595   14824 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0716 17:10:19.408680   14824 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0716 17:10:19.679034   14824 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0716 17:10:19.679034   14824 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0716 17:10:19.921520   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:19.948976   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:19.949077   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:19.949513   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:20.072331   14824 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0716 17:10:20.072399   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0716 17:10:20.092473   14824 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0716 17:10:20.092473   14824 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0716 17:10:20.410166   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:10:20.661323   14824 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0716 17:10:20.661323   14824 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0716 17:10:20.669220   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0716 17:10:20.684034   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.7594456s)
	I0716 17:10:20.684138   14824 addons.go:475] Verifying addon registry=true in "addons-933500"
	I0716 17:10:20.690359   14824 out.go:177] * Verifying registry addon...
	I0716 17:10:20.694414   14824 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0716 17:10:20.724947   14824 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0716 17:10:20.724947   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:20.923209   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:20.923376   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:20.923817   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:21.187898   14824 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0716 17:10:21.196923   14824 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0716 17:10:21.196977   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0716 17:10:21.218773   14824 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0716 17:10:21.218836   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:21.479322   14824 addons.go:234] Setting addon gcp-auth=true in "addons-933500"
	I0716 17:10:21.479322   14824 host.go:66] Checking if "addons-933500" exists ...
	I0716 17:10:21.481567   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:21.585067   14824 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0716 17:10:21.585218   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0716 17:10:21.590885   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.5341657s)
	I0716 17:10:21.590885   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.6011948s)
	I0716 17:10:21.590885   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.8588032s)
	I0716 17:10:21.726823   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:21.974266   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:22.231474   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:22.457447   14824 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0716 17:10:22.457602   14824 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0716 17:10:22.579443   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0716 17:10:22.727696   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:23.126408   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0716 17:10:23.206441   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:23.697543   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:23.697630   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:23.711214   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:23.711214   14824 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0716 17:10:23.711214   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-933500 ).state
	I0716 17:10:24.203588   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:24.397399   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:24.710234   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:25.224320   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:25.712258   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:26.139450   14824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:10:26.139450   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:26.140284   14824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-933500 ).networkadapters[0]).ipaddresses[0]
	I0716 17:10:26.212749   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:26.442521   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:26.738360   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:27.215349   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:27.765701   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:28.256186   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:28.704408   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:28.918342   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:28.985682   14824 main.go:141] libmachine: [stdout =====>] : 172.27.174.219
	
	I0716 17:10:28.986690   14824 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:10:28.986865   14824 sshutil.go:53] new ssh client: &{IP:172.27.174.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-933500\id_rsa Username:docker}
	I0716 17:10:29.266756   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:29.762808   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:30.207703   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:30.715055   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:31.210653   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:31.398430   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:31.711719   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:32.228622   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:32.796474   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:33.336007   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:33.420485   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:33.778394   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:34.259939   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:34.718433   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:34.868922   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (17.123778s)
	I0716 17:10:34.869046   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (16.4466659s)
	I0716 17:10:34.869184   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (16.3457049s)
	I0716 17:10:34.869240   14824 addons.go:475] Verifying addon metrics-server=true in "addons-933500"
	I0716 17:10:34.869354   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (15.8768909s)
	W0716 17:10:34.869354   14824 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0716 17:10:34.869529   14824 retry.go:31] will retry after 177.606772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0716 17:10:34.869529   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (15.8335721s)
	I0716 17:10:34.869599   14824 addons.go:475] Verifying addon ingress=true in "addons-933500"
	I0716 17:10:34.869671   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (15.7963747s)
	I0716 17:10:34.869875   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.4595601s)
	I0716 17:10:34.869991   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (14.2007125s)
	I0716 17:10:34.869785   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (15.6790393s)
	I0716 17:10:34.870222   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.2907283s)
	I0716 17:10:34.879858   14824 out.go:177] * Verifying ingress addon...
	I0716 17:10:34.881859   14824 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-933500 service yakd-dashboard -n yakd-dashboard
	
	I0716 17:10:34.886864   14824 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0716 17:10:34.990322   14824 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0716 17:10:34.990322   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:35.073326   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0716 17:10:35.099345   14824 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0716 17:10:35.317695   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:35.472943   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:35.534666   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:35.752356   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:35.903349   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:36.235674   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:36.475951   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:36.742094   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:36.777674   14824 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (13.0664064s)
	I0716 17:10:36.779698   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.653234s)
	I0716 17:10:36.779698   14824 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-933500"
	I0716 17:10:36.780680   14824 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0716 17:10:36.785676   14824 out.go:177] * Verifying csi-hostpath-driver addon...
	I0716 17:10:36.789682   14824 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0716 17:10:36.791681   14824 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0716 17:10:36.793681   14824 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0716 17:10:36.793681   14824 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0716 17:10:36.838595   14824 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0716 17:10:36.838692   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:36.898805   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:36.917413   14824 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0716 17:10:36.917488   14824 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0716 17:10:37.041658   14824 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0716 17:10:37.041781   14824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0716 17:10:37.157121   14824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0716 17:10:37.378078   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:37.379265   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:37.405282   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:37.712775   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:37.775767   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.7024293s)
	I0716 17:10:37.809666   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:37.889298   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:37.894686   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:38.207667   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:38.332631   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:38.388739   14824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.2316125s)
	I0716 17:10:38.395283   14824 addons.go:475] Verifying addon gcp-auth=true in "addons-933500"
	I0716 17:10:38.398272   14824 out.go:177] * Verifying gcp-auth addon...
	I0716 17:10:38.403265   14824 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0716 17:10:38.411508   14824 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0716 17:10:38.422998   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:38.717548   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:38.826253   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:38.896702   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:39.218162   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:39.316651   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:39.397971   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:39.748096   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:39.825824   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:39.896394   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:39.904255   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:40.217531   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:40.329727   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:40.417445   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:40.703749   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:40.801345   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:40.894933   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:41.213071   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:41.316030   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:41.392882   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:41.703370   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:41.840877   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:41.893360   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:42.208164   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:42.307585   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:42.384719   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:42.398656   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:42.701271   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:42.813906   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:42.893621   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:43.206444   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:43.302874   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:43.394297   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:43.713585   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:43.809502   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:43.893526   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:44.207414   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:44.301547   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:44.393449   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:44.710984   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:44.810470   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:44.886988   14824 pod_ready.go:102] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"False"
	I0716 17:10:44.891455   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:45.217595   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:46.082550   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:46.082550   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:46.083322   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:46.104088   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:46.110465   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:46.124059   14824 pod_ready.go:92] pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace has status "Ready":"True"
	I0716 17:10:46.124111   14824 pod_ready.go:81] duration metric: took 46.7512733s for pod "coredns-7db6d8ff4d-g8bxj" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.124164   14824 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t92qz" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.136787   14824 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-t92qz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-t92qz" not found
	I0716 17:10:46.136830   14824 pod_ready.go:81] duration metric: took 12.666ms for pod "coredns-7db6d8ff4d-t92qz" in "kube-system" namespace to be "Ready" ...
	E0716 17:10:46.136830   14824 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-t92qz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-t92qz" not found
	I0716 17:10:46.136890   14824 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.199833   14824 pod_ready.go:92] pod "etcd-addons-933500" in "kube-system" namespace has status "Ready":"True"
	I0716 17:10:46.200477   14824 pod_ready.go:81] duration metric: took 63.5864ms for pod "etcd-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.200477   14824 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.206492   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:46.223849   14824 pod_ready.go:92] pod "kube-apiserver-addons-933500" in "kube-system" namespace has status "Ready":"True"
	I0716 17:10:46.223942   14824 pod_ready.go:81] duration metric: took 23.4281ms for pod "kube-apiserver-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.223942   14824 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.245910   14824 pod_ready.go:92] pod "kube-controller-manager-addons-933500" in "kube-system" namespace has status "Ready":"True"
	I0716 17:10:46.245910   14824 pod_ready.go:81] duration metric: took 21.9682ms for pod "kube-controller-manager-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.245910   14824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jqj96" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.304554   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:46.394174   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:46.501590   14824 pod_ready.go:92] pod "kube-proxy-jqj96" in "kube-system" namespace has status "Ready":"True"
	I0716 17:10:46.501689   14824 pod_ready.go:81] duration metric: took 255.778ms for pod "kube-proxy-jqj96" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.501689   14824 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.702245   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:46.805416   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:46.899610   14824 pod_ready.go:92] pod "kube-scheduler-addons-933500" in "kube-system" namespace has status "Ready":"True"
	I0716 17:10:46.899666   14824 pod_ready.go:81] duration metric: took 397.9749ms for pod "kube-scheduler-addons-933500" in "kube-system" namespace to be "Ready" ...
	I0716 17:10:46.899702   14824 pod_ready.go:38] duration metric: took 47.8832769s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 17:10:46.899702   14824 api_server.go:52] waiting for apiserver process to appear ...
	I0716 17:10:46.900718   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:46.913869   14824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 17:10:46.950186   14824 api_server.go:72] duration metric: took 51.2843381s to wait for apiserver process to appear ...
	I0716 17:10:46.950186   14824 api_server.go:88] waiting for apiserver healthz status ...
	I0716 17:10:46.950186   14824 api_server.go:253] Checking apiserver healthz at https://172.27.174.219:8443/healthz ...
	I0716 17:10:46.957713   14824 api_server.go:279] https://172.27.174.219:8443/healthz returned 200:
	ok
	I0716 17:10:46.959412   14824 api_server.go:141] control plane version: v1.30.2
	I0716 17:10:46.960301   14824 api_server.go:131] duration metric: took 10.1154ms to wait for apiserver health ...
	I0716 17:10:46.960301   14824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 17:10:47.109426   14824 system_pods.go:59] 18 kube-system pods found
	I0716 17:10:47.109426   14824 system_pods.go:61] "coredns-7db6d8ff4d-g8bxj" [937066bf-c9b8-4026-984a-89fc0ce0d7ba] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "csi-hostpath-attacher-0" [5a1d5a6a-889c-4ab2-957b-ee89ff6806ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0716 17:10:47.109426   14824 system_pods.go:61] "csi-hostpath-resizer-0" [911a445f-b88f-4fbf-9c8b-e24f4094d640] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0716 17:10:47.109426   14824 system_pods.go:61] "csi-hostpathplugin-p747v" [335749b4-9c77-4fe1-b02d-e1ef8baf920c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0716 17:10:47.109426   14824 system_pods.go:61] "etcd-addons-933500" [6bc5eb7c-cf03-4763-852a-ee84661f426c] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "kube-apiserver-addons-933500" [828e95c9-a67e-4f5d-b447-e34c977604b5] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "kube-controller-manager-addons-933500" [c86d3fd1-f400-4450-ac67-56f279bd542e] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "kube-ingress-dns-minikube" [27fd58d9-b012-42a5-852f-1bc6af23eae7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0716 17:10:47.109426   14824 system_pods.go:61] "kube-proxy-jqj96" [fb4e09f0-0c87-4671-afba-eefc34232a29] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "kube-scheduler-addons-933500" [20365058-cba6-486e-8984-27463ca3e32b] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "metrics-server-c59844bb4-2dk8v" [63f4a937-d36e-49c0-8d77-8d7e2a836757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0716 17:10:47.109426   14824 system_pods.go:61] "nvidia-device-plugin-daemonset-wdv2m" [4b9c108e-7da6-4ffc-a3ca-74423d63615e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0716 17:10:47.109426   14824 system_pods.go:61] "registry-proxy-8t5wx" [236014e2-fc14-4543-9d47-a39a26a84993] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0716 17:10:47.109426   14824 system_pods.go:61] "registry-tzs5b" [6a01ed28-a251-4305-b912-3bdc64473548] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0716 17:10:47.109426   14824 system_pods.go:61] "snapshot-controller-745499f584-gp6ts" [7f98a491-571c-4e36-b28e-bf9379130487] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0716 17:10:47.109426   14824 system_pods.go:61] "snapshot-controller-745499f584-r97tb" [59f3f7ca-56e6-44cf-8bd8-773a6593bc39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0716 17:10:47.109426   14824 system_pods.go:61] "storage-provisioner" [594c5746-13cb-46bc-9e34-9a622a1bf939] Running
	I0716 17:10:47.109426   14824 system_pods.go:61] "tiller-deploy-6677d64bcd-7npsr" [5579b661-bacb-48f9-9587-7eed216c2535] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0716 17:10:47.109426   14824 system_pods.go:74] duration metric: took 149.1242ms to wait for pod list to return data ...
	I0716 17:10:47.109426   14824 default_sa.go:34] waiting for default service account to be created ...
	I0716 17:10:47.213367   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:47.290264   14824 default_sa.go:45] found service account: "default"
	I0716 17:10:47.290327   14824 default_sa.go:55] duration metric: took 180.9ms for default service account to be created ...
	I0716 17:10:47.290327   14824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 17:10:47.317121   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:47.402240   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:47.508403   14824 system_pods.go:86] 18 kube-system pods found
	I0716 17:10:47.508403   14824 system_pods.go:89] "coredns-7db6d8ff4d-g8bxj" [937066bf-c9b8-4026-984a-89fc0ce0d7ba] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "csi-hostpath-attacher-0" [5a1d5a6a-889c-4ab2-957b-ee89ff6806ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0716 17:10:47.508403   14824 system_pods.go:89] "csi-hostpath-resizer-0" [911a445f-b88f-4fbf-9c8b-e24f4094d640] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0716 17:10:47.508403   14824 system_pods.go:89] "csi-hostpathplugin-p747v" [335749b4-9c77-4fe1-b02d-e1ef8baf920c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0716 17:10:47.508403   14824 system_pods.go:89] "etcd-addons-933500" [6bc5eb7c-cf03-4763-852a-ee84661f426c] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "kube-apiserver-addons-933500" [828e95c9-a67e-4f5d-b447-e34c977604b5] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "kube-controller-manager-addons-933500" [c86d3fd1-f400-4450-ac67-56f279bd542e] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "kube-ingress-dns-minikube" [27fd58d9-b012-42a5-852f-1bc6af23eae7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0716 17:10:47.508403   14824 system_pods.go:89] "kube-proxy-jqj96" [fb4e09f0-0c87-4671-afba-eefc34232a29] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "kube-scheduler-addons-933500" [20365058-cba6-486e-8984-27463ca3e32b] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "metrics-server-c59844bb4-2dk8v" [63f4a937-d36e-49c0-8d77-8d7e2a836757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0716 17:10:47.508403   14824 system_pods.go:89] "nvidia-device-plugin-daemonset-wdv2m" [4b9c108e-7da6-4ffc-a3ca-74423d63615e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0716 17:10:47.508403   14824 system_pods.go:89] "registry-proxy-8t5wx" [236014e2-fc14-4543-9d47-a39a26a84993] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0716 17:10:47.508403   14824 system_pods.go:89] "registry-tzs5b" [6a01ed28-a251-4305-b912-3bdc64473548] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0716 17:10:47.508403   14824 system_pods.go:89] "snapshot-controller-745499f584-gp6ts" [7f98a491-571c-4e36-b28e-bf9379130487] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0716 17:10:47.508403   14824 system_pods.go:89] "snapshot-controller-745499f584-r97tb" [59f3f7ca-56e6-44cf-8bd8-773a6593bc39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0716 17:10:47.508403   14824 system_pods.go:89] "storage-provisioner" [594c5746-13cb-46bc-9e34-9a622a1bf939] Running
	I0716 17:10:47.508403   14824 system_pods.go:89] "tiller-deploy-6677d64bcd-7npsr" [5579b661-bacb-48f9-9587-7eed216c2535] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0716 17:10:47.508403   14824 system_pods.go:126] duration metric: took 218.076ms to wait for k8s-apps to be running ...
	I0716 17:10:47.508403   14824 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 17:10:47.521045   14824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 17:10:47.552524   14824 system_svc.go:56] duration metric: took 43.8735ms WaitForService to wait for kubelet
	I0716 17:10:47.552524   14824 kubeadm.go:582] duration metric: took 51.8866744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:10:47.552589   14824 node_conditions.go:102] verifying NodePressure condition ...
	I0716 17:10:47.702681   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:47.704731   14824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 17:10:47.704852   14824 node_conditions.go:123] node cpu capacity is 2
	I0716 17:10:47.704926   14824 node_conditions.go:105] duration metric: took 152.2922ms to run NodePressure ...
	I0716 17:10:47.704926   14824 start.go:241] waiting for startup goroutines ...
	I0716 17:10:47.799249   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:47.893248   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:48.210530   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:48.305485   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:48.400330   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:48.702919   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:48.816181   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:48.908111   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:49.208117   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:49.308377   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:49.396802   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:49.860014   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:49.860796   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:49.909508   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:50.212157   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:50.309095   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:50.400976   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:50.701509   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:50.813662   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:50.906187   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:51.208617   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:51.305001   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:51.398924   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:51.701786   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:51.819493   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:51.908482   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:52.212768   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:52.305939   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:52.396186   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:52.797913   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:52.807703   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:53.101439   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:53.211503   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:53.423096   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:53.423527   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:53.705631   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:53.801942   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:53.894437   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:54.213149   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:54.309434   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:54.401582   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:54.703971   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:54.801320   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:54.893949   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:55.213206   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:55.312081   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:55.402672   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:55.919562   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:56.004075   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:56.004734   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:56.213500   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:56.309188   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:56.410072   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:56.702772   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:56.814160   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:56.998568   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:57.271089   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:57.312128   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:57.403670   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:57.703313   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:57.813184   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:57.906193   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:58.211852   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:58.308947   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:58.401019   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:58.705058   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:58.800622   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:58.892921   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:59.211145   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:59.307909   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:10:59.401769   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:59.906106   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:10:59.908094   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:10:59.908094   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:00.202066   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:00.320174   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:00.406242   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:00.704617   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:00.814461   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:00.905978   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:01.217860   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:01.308817   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:01.399330   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:01.715039   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:01.814292   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:01.904793   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:02.254685   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:02.315113   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:02.398914   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:02.731139   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:02.829018   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:02.924289   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:03.208922   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:03.306721   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:03.397720   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:03.702137   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:03.814025   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:03.905945   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:04.213404   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:04.309311   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:04.401288   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:04.704083   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:04.812991   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:04.905577   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:05.206896   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:05.304130   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:05.396030   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:05.714619   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:05.808897   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:05.902105   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:06.205693   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:06.301927   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:06.394304   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:06.712315   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:06.811508   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:06.903185   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:07.209018   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:07.304238   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:07.397590   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:07.712475   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:07.808704   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:07.901282   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:08.217840   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:08.312525   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:08.404758   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:08.706525   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:08.802996   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:08.894726   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:09.212816   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:09.312887   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:09.402980   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:09.704855   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:09.800409   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:09.907848   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:10.209250   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:10.309154   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:10.400683   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:10.703199   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:10.813799   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:10.907156   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:11.207328   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:11.302657   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:11.395401   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:11.713505   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:11.811656   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:11.901542   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:12.206896   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:12.303167   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:12.409541   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:12.702985   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:12.809031   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:12.897353   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:13.304933   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:13.616135   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:13.619386   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:13.852673   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:13.914411   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:13.917142   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:14.646110   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:14.649804   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:14.651011   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:14.714575   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:14.810129   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:14.900406   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:15.215214   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:15.312962   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:15.409172   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:15.722359   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0716 17:11:15.813509   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:15.903945   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:16.209679   14824 kapi.go:107] duration metric: took 55.5150379s to wait for kubernetes.io/minikube-addons=registry ...
	I0716 17:11:16.302053   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:16.408920   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:16.823629   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:16.898815   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:17.312777   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:17.412402   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:17.803030   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:17.895799   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:18.310351   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:18.404025   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:18.802671   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:18.896676   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:19.311308   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:19.402328   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:19.814942   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:19.910272   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:20.304006   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:20.399139   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:20.811043   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:20.900951   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:21.301803   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:21.402104   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:21.806823   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:21.900505   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:22.315345   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:22.404989   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:22.807987   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:22.897484   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:23.312655   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:23.403332   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:23.816034   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:23.908293   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:24.311432   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:24.401440   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:24.816114   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:24.906555   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:25.304057   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:25.398059   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:25.819521   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:25.903736   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:26.303052   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:26.396701   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:26.811704   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:26.904309   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:27.308385   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:27.403801   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:27.803907   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:27.898275   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:28.311746   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:28.401906   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:28.802490   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:28.894868   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:29.308953   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:29.401992   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:29.812914   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:29.908255   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:30.306205   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:30.400606   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:30.814066   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:30.907352   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:31.306144   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:31.397029   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:31.810102   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:31.902017   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:32.314420   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:32.404417   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:32.804282   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:32.894836   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:33.310197   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:33.402726   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:34.152010   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:34.152169   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:34.504588   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:34.505002   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:34.812010   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:34.904708   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:35.315338   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:35.517372   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:35.806676   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:35.895753   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:36.310634   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:36.404084   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:36.804582   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:36.895918   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:37.311795   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:37.404477   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:37.811307   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:37.899318   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:38.313940   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:38.405991   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:38.816576   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:38.906493   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:39.315748   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:39.408949   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:39.815877   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:39.907133   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:40.320821   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:40.407964   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:40.807010   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:40.900843   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:41.313463   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:41.396184   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:41.809110   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:41.901356   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:42.313109   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:42.404217   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:42.810293   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:42.903342   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:43.318292   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:43.402063   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:43.813560   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:43.901875   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:44.321967   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:44.425417   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:44.806971   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:44.895843   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:45.312739   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:45.407815   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:45.811882   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:45.895802   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:46.325595   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:46.404404   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:46.800967   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:46.894560   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:47.309412   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:47.401008   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:47.821193   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:47.906811   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:48.304376   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:48.397092   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:48.808493   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:48.900084   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:49.300658   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:49.408104   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:49.815193   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:49.998628   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:50.303834   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:50.394368   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:50.811088   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:50.905404   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:51.306286   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:51.400112   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:51.815464   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:51.913040   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:52.510950   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:52.511174   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:52.802966   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:52.907120   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:53.309009   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:53.396312   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:53.815510   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:53.912536   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:54.338675   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:54.415938   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:54.806323   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:54.900176   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:55.314733   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:55.643319   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:55.861472   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:55.895861   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:56.311901   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:56.404115   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:56.803175   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:56.894762   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:57.315242   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:57.406784   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:57.810041   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:57.902152   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:58.314349   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:58.408467   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:58.803465   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:58.896004   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:59.311293   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:59.402737   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:11:59.813650   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:11:59.906241   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:00.310992   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:00.396990   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:00.809959   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:00.901989   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:01.313583   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:01.404458   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:01.805485   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:01.911444   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:02.464166   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:02.464588   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:02.803102   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:02.910990   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:03.310242   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:03.397400   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:03.810392   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:03.902599   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:04.313939   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:04.407774   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:04.802975   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:04.896991   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:05.311247   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:05.403333   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:05.813649   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:05.896080   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:06.300913   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:06.405818   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:07.046075   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:07.053052   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:07.307517   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:07.398087   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:07.818197   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:07.901318   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:08.301695   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:08.394982   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:08.810221   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:08.901213   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:09.302017   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:09.394837   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:09.810487   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:09.903160   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:10.303708   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:10.395670   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:10.811211   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:11.161928   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:11.301523   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:11.398769   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:11.811268   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:11.902198   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:12.314253   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:12.420018   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:12.807160   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:12.898261   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:13.311659   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:13.405064   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:14.015385   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:14.016373   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:14.311698   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:14.403897   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:14.806685   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:14.909346   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:15.304480   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:15.396785   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:15.809577   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:15.904170   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:16.313125   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:16.405765   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:16.805032   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:16.897412   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:17.306834   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:17.400812   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:17.814029   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:17.906043   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:18.303995   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:18.397102   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:18.996828   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:18.997157   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:19.315560   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:19.412198   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:19.806875   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:19.896144   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:20.317097   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:20.412071   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:20.802147   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:20.895854   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:21.308504   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:21.404486   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:21.829587   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:21.911431   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:22.312640   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:22.399449   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:22.813852   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:22.912242   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:23.308302   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:23.399961   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:23.817565   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:23.925737   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:24.304807   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:24.398033   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:24.814262   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:24.906935   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:25.306840   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:25.400181   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:25.815845   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:25.908632   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:26.308375   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:26.399102   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:26.814252   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:26.905843   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:27.315586   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:27.401927   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:27.801830   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:27.896396   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:28.306257   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:28.396411   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:28.804933   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:28.915669   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:29.314025   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:29.406411   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:29.815973   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:29.909012   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:30.308243   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:30.407957   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:30.804027   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:30.908316   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:31.311441   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:31.414128   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:31.801154   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:31.909343   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:32.311203   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:32.410424   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:32.803930   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:32.896678   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:33.313947   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:33.959320   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:33.962710   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:33.968442   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:34.313771   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:34.406130   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:34.803262   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:34.898008   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:35.308412   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:35.397641   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:35.816497   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:35.907327   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:36.304671   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:36.399551   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:37.165501   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:37.168659   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:37.356904   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:37.414606   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:37.808477   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:37.899567   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:38.318466   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:38.402516   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:38.802379   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:38.896370   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:39.313937   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:39.404319   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:39.808965   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:39.905920   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:40.357720   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:40.401485   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:40.812344   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:40.906510   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:41.317159   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:41.407413   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:41.807804   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:41.901063   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:42.302671   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:42.394017   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:42.807099   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:42.899540   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:43.364484   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:43.559020   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:43.799784   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:43.894334   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:44.314951   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:44.404928   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:44.805370   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:44.899482   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:45.314486   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:45.409649   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:45.811486   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:45.901358   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:46.314588   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:46.406362   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:46.802132   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:46.895985   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:47.307596   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:47.398749   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:47.811334   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:47.901224   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:48.304653   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:48.396757   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:48.812733   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:48.906392   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:49.305511   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:49.398287   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:49.814083   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:49.907749   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:50.313796   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:50.406417   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:50.802888   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:50.894114   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:51.301859   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:51.400225   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:51.831230   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:51.911292   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:52.310931   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:52.401924   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:52.801014   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:52.894481   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:53.310706   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:53.402654   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:53.819130   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:53.907132   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:54.311748   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:54.401979   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:54.801241   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:54.893871   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:55.311480   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:55.402470   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:55.806932   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:55.896181   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:56.308928   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:56.403420   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:56.906864   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:56.909863   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:57.369522   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:57.407091   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:57.804079   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:57.892758   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:58.310416   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:58.404502   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:58.802122   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:58.905314   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:59.308206   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:59.401400   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:12:59.803740   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:12:59.895693   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:00.308589   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:00.398417   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:00.801470   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:00.893721   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:01.313626   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:01.407060   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:01.807696   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:01.900258   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:02.316433   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:02.406672   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:02.806983   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:02.899886   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:03.310970   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:03.394623   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:03.812575   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:03.903993   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:04.469255   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:04.472250   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:04.813606   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:04.905634   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:05.311104   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:05.546149   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:05.810444   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:05.907690   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:06.517539   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:06.522058   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:06.826796   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:06.997185   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:07.311023   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:07.400194   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:07.811726   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:07.902930   14824 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0716 17:13:08.301608   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:08.395359   14824 kapi.go:107] duration metric: took 2m33.5078713s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0716 17:13:08.811172   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:09.306568   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:09.809841   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:10.306302   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:10.809246   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:11.307633   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:11.810680   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:12.315109   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:12.813825   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:13.308290   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:13.814051   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:14.320915   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0716 17:13:14.806115   14824 kapi.go:107] duration metric: took 2m38.013716s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0716 17:13:23.432277   14824 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0716 17:13:23.432358   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:23.923036   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:24.411180   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:24.911500   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:25.417239   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:25.918573   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:26.416148   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:26.918498   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:27.419164   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:27.923433   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:28.422433   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:28.920559   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:29.423101   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:29.921863   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:30.422466   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:30.911032   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:31.410473   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:31.913207   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:32.409284   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:32.922444   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:33.422615   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:33.923972   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:34.410363   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:34.914415   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:35.416891   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:35.917762   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:36.421782   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:36.919533   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:37.423325   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:37.911176   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:38.424432   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:38.912617   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:39.415409   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:39.917602   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:40.411440   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:40.909635   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:41.418102   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:41.921869   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:42.417765   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:42.924089   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:43.413457   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:43.922915   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:44.424224   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:44.921436   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:45.421814   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:45.914535   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:46.420963   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:46.912945   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:47.414601   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:47.917697   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:48.417151   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:48.915043   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:49.412202   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:49.912863   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:50.410504   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:50.925493   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:51.414508   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:51.920346   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:52.420560   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:52.922315   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:53.423170   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:53.928314   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:54.420469   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:54.922333   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:55.411472   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:55.922561   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:56.412218   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:56.916570   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:57.413340   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:57.917958   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:58.413516   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:58.922979   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:59.417585   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:13:59.922691   14824 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0716 17:14:00.418366   14824 kapi.go:107] duration metric: took 3m22.014284s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0716 17:14:00.421749   14824 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-933500 cluster.
	I0716 17:14:00.424644   14824 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0716 17:14:00.427798   14824 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0716 17:14:00.430616   14824 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, volcano, ingress-dns, metrics-server, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0716 17:14:00.436952   14824 addons.go:510] duration metric: took 4m4.770322s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin volcano ingress-dns metrics-server helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0716 17:14:00.436952   14824 start.go:246] waiting for cluster config update ...
	I0716 17:14:00.436952   14824 start.go:255] writing updated cluster config ...
	I0716 17:14:00.450985   14824 ssh_runner.go:195] Run: rm -f paused
	I0716 17:14:00.719917   14824 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0716 17:14:00.723640   14824 out.go:177] * Done! kubectl is now configured to use "addons-933500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 00:14:49 addons-933500 dockerd[1431]: time="2024-07-17T00:14:49.774610512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:14:49 addons-933500 dockerd[1431]: time="2024-07-17T00:14:49.774871813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:14:50 addons-933500 dockerd[1425]: time="2024-07-17T00:14:50.220074566Z" level=info msg="ignoring event" container=d177d69825f1ff74548ffac4852399a481d21011f307b366667f1b84e5415f47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:14:50 addons-933500 dockerd[1431]: time="2024-07-17T00:14:50.222524072Z" level=info msg="shim disconnected" id=d177d69825f1ff74548ffac4852399a481d21011f307b366667f1b84e5415f47 namespace=moby
	Jul 17 00:14:50 addons-933500 dockerd[1431]: time="2024-07-17T00:14:50.222664673Z" level=warning msg="cleaning up after shim disconnected" id=d177d69825f1ff74548ffac4852399a481d21011f307b366667f1b84e5415f47 namespace=moby
	Jul 17 00:14:50 addons-933500 dockerd[1431]: time="2024-07-17T00:14:50.222681973Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 00:14:50 addons-933500 dockerd[1425]: time="2024-07-17T00:14:50.259447868Z" level=warning msg="failed to close stdin: NotFound: task d177d69825f1ff74548ffac4852399a481d21011f307b366667f1b84e5415f47 not found: not found"
	Jul 17 00:14:52 addons-933500 dockerd[1425]: time="2024-07-17T00:14:52.329872642Z" level=info msg="ignoring event" container=f8f0bf12928ce8e656e722c4e05b0e87a5cf5d97cb6a215a4bdf7d360a2fe409 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:14:52 addons-933500 dockerd[1431]: time="2024-07-17T00:14:52.329776241Z" level=info msg="shim disconnected" id=f8f0bf12928ce8e656e722c4e05b0e87a5cf5d97cb6a215a4bdf7d360a2fe409 namespace=moby
	Jul 17 00:14:52 addons-933500 dockerd[1431]: time="2024-07-17T00:14:52.330514443Z" level=warning msg="cleaning up after shim disconnected" id=f8f0bf12928ce8e656e722c4e05b0e87a5cf5d97cb6a215a4bdf7d360a2fe409 namespace=moby
	Jul 17 00:14:52 addons-933500 dockerd[1431]: time="2024-07-17T00:14:52.330545643Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 00:14:53 addons-933500 dockerd[1431]: time="2024-07-17T00:14:53.709469280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:14:53 addons-933500 dockerd[1431]: time="2024-07-17T00:14:53.709606781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:14:53 addons-933500 dockerd[1431]: time="2024-07-17T00:14:53.709625981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:14:53 addons-933500 dockerd[1431]: time="2024-07-17T00:14:53.709909782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:14:53 addons-933500 cri-dockerd[1326]: time="2024-07-17T00:14:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7543647f2dbc80805b12ead9c23d296cf6fb8bfa6bf2d538396ae13707cac5d0/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.267458747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.267656448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.267677648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.267962849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:14:54 addons-933500 dockerd[1425]: time="2024-07-17T00:14:54.636015829Z" level=info msg="ignoring event" container=1e4a2fb1fcb876ceded595ad4b998a7cb9bef93c9ef4689f8bc9acde749bd17b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.644874570Z" level=info msg="shim disconnected" id=1e4a2fb1fcb876ceded595ad4b998a7cb9bef93c9ef4689f8bc9acde749bd17b namespace=moby
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.644969770Z" level=warning msg="cleaning up after shim disconnected" id=1e4a2fb1fcb876ceded595ad4b998a7cb9bef93c9ef4689f8bc9acde749bd17b namespace=moby
	Jul 17 00:14:54 addons-933500 dockerd[1431]: time="2024-07-17T00:14:54.644987370Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 00:14:54 addons-933500 dockerd[1425]: time="2024-07-17T00:14:54.674071703Z" level=warning msg="failed to close stdin: NotFound: task 1e4a2fb1fcb876ceded595ad4b998a7cb9bef93c9ef4689f8bc9acde749bd17b not found: not found"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	1e4a2fb1fcb87       98f6c3b32d565                                                                                                                                2 seconds ago        Exited              helm-test                                0                   7543647f2dbc8       helm-test
	90dfeb5655f2c       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                                                                7 seconds ago        Running             task-pv-container                        0                   d831ca945d588       task-pv-pod
	5a7527d4d16cd       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                                                                12 seconds ago       Running             nginx                                    0                   22ec835a77ed5       test-job-nginx-0
	703ad2dc6c08c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734                            26 seconds ago       Exited              gadget                                   4                   09bdd1659a8cb       gadget-fwwtf
	af4c183898e96       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                                        27 seconds ago       Running             headlamp                                 0                   9c3b0812963a6       headlamp-7867546754-nsdnt
	b473ed685152a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 57 seconds ago       Running             gcp-auth                                 0                   92e956d5490c1       gcp-auth-5db96cd9b4-7xbvq
	a8f4a9d27d0a3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   02228cd859ff7       csi-hostpathplugin-p747v
	ce381fc4e6788       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   02228cd859ff7       csi-hostpathplugin-p747v
	eb6cc2878b7d3       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   5332ced46f4c8       ingress-nginx-controller-768f948f8f-nh9c6
	b2889c770f4f3       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   02228cd859ff7       csi-hostpathplugin-p747v
	cfb4f74092d54       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         2 minutes ago        Running             admission                                0                   ec59845bdcd18       volcano-admission-5f7844f7bc-r5vwd
	b423f9c7849a9       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   02228cd859ff7       csi-hostpathplugin-p747v
	3388dffd4cd58       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   02228cd859ff7       csi-hostpathplugin-p747v
	c3727ec7e84f7       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   81bc24e410107       csi-hostpath-resizer-0
	6236d011280f7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   02228cd859ff7       csi-hostpathplugin-p747v
	c83d16100ec74       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   d6f97ea437176       csi-hostpath-attacher-0
	1d031b2d1f05d       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               2 minutes ago        Running             volcano-scheduler                        0                   d16ee4f36347c       volcano-scheduler-844f6db89b-2dl4x
	b9425e4a4ed9a       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      2 minutes ago        Running             volcano-controllers                      0                   1b9405a0ad3c2       volcano-controllers-59cb4746db-grcdb
	a66346a0695c5       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   18abe5ccadd18       ingress-nginx-admission-patch-bffc7
	f809fa2ecf182       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   5795e5aab44f9       ingress-nginx-admission-create-txtv2
	80bc977efb5d0       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   3810c11b397b5       local-path-provisioner-8d985888d-rs8z8
	592ccebfa73e0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   26aface2882ae       snapshot-controller-745499f584-gp6ts
	34fbe93887384       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   695d31499c6f0       snapshot-controller-745499f584-r97tb
	ce77d31759531       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   a3365fcadca1b       tiller-deploy-6677d64bcd-7npsr
	aef70c8adb167       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        2 minutes ago        Running             yakd                                     0                   05cf1159eab5f       yakd-dashboard-799879c74f-pshxt
	935b9cd51cade       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   fa4965a0d3456       metrics-server-c59844bb4-2dk8v
	e5d8590cc6328       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             3 minutes ago        Running             minikube-ingress-dns                     0                   b494fa5c32e93       kube-ingress-dns-minikube
	669c0e2873d8e       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               4 minutes ago        Running             cloud-spanner-emulator                   0                   a14b962db1824       cloud-spanner-emulator-6fcd4f6f98-4j5wd
	9df03d52073f9       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   20b21a4516272       storage-provisioner
	b81566a8cf62d       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   a1027e87382d5       coredns-7db6d8ff4d-g8bxj
	ea134cc656c80       53c535741fb44                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   fb5b1d8df3a83       kube-proxy-jqj96
	a6cb62cff39a4       7820c83aa1394                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   e212d12f24c01       kube-scheduler-addons-933500
	3e9ac8c39d8dd       56ce0fd9fb532                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   bea9282bd7ab7       kube-apiserver-addons-933500
	c7e8a7f34e6a1       e874818b3caac                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   ddebd434899e4       kube-controller-manager-addons-933500
	a6799eff8b62f       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   577de81661216       etcd-addons-933500
	
	
	==> controller_ingress [eb6cc2878b7d] <==
	W0717 00:13:07.395271       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0717 00:13:07.395836       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0717 00:13:07.405303       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.2" state="clean" commit="39683505b630ff2121012f3c5b16215a1449d5ed" platform="linux/amd64"
	I0717 00:13:07.609536       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0717 00:13:07.639341       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0717 00:13:07.663082       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0717 00:13:07.679701       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"61d499cc-a924-434b-bce2-7bd4712518c4", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0717 00:13:07.683034       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"cc07f530-eb06-4329-a4fd-6cfc35fe8eea", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0717 00:13:07.683110       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"a29e2c27-6671-4277-82f5-d97ab49fb51e", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0717 00:13:08.864922       7 nginx.go:307] "Starting NGINX process"
	I0717 00:13:08.865016       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0717 00:13:08.866566       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0717 00:13:08.866943       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0717 00:13:08.893415       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0717 00:13:08.893849       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-nh9c6"
	I0717 00:13:08.908038       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-nh9c6" node="addons-933500"
	I0717 00:13:08.967041       7 controller.go:210] "Backend successfully reloaded"
	I0717 00:13:08.967468       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0717 00:13:08.967967       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-nh9c6", UID:"7b48e3ee-05ee-43aa-b599-8fe386e3a394", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [b81566a8cf62] <==
	[INFO] 10.244.0.7:49160 - 17015 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001492607s
	[INFO] 10.244.0.7:36261 - 31429 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000737003s
	[INFO] 10.244.0.7:36261 - 54467 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000869904s
	[INFO] 10.244.0.7:47005 - 29283 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163s
	[INFO] 10.244.0.7:47005 - 2918 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000340602s
	[INFO] 10.244.0.7:45552 - 16418 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000193201s
	[INFO] 10.244.0.7:45552 - 46628 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000196401s
	[INFO] 10.244.0.7:44283 - 6075 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152601s
	[INFO] 10.244.0.7:44283 - 20869 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000288902s
	[INFO] 10.244.0.7:37922 - 44849 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073601s
	[INFO] 10.244.0.7:37922 - 55615 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000429502s
	[INFO] 10.244.0.7:60556 - 49048 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000946s
	[INFO] 10.244.0.7:60556 - 21149 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177401s
	[INFO] 10.244.0.7:41958 - 49937 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000278501s
	[INFO] 10.244.0.7:41958 - 49439 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001466s
	[INFO] 10.244.0.26:38559 - 9795 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002085609s
	[INFO] 10.244.0.26:47547 - 48768 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002581412s
	[INFO] 10.244.0.26:60010 - 45526 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000268502s
	[INFO] 10.244.0.26:36006 - 52494 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0000866s
	[INFO] 10.244.0.26:44439 - 12414 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001545s
	[INFO] 10.244.0.26:34890 - 9751 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148901s
	[INFO] 10.244.0.26:51672 - 12577 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.00201171s
	[INFO] 10.244.0.26:44829 - 61535 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002622212s
	[INFO] 10.244.0.27:52069 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000282601s
	[INFO] 10.244.0.27:34165 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001859s
	
	
	==> describe nodes <==
	Name:               addons-933500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-933500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-933500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_09_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-933500
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-933500"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:09:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-933500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:14:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:14:48 +0000   Wed, 17 Jul 2024 00:09:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:14:48 +0000   Wed, 17 Jul 2024 00:09:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:14:48 +0000   Wed, 17 Jul 2024 00:09:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:14:48 +0000   Wed, 17 Jul 2024 00:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.174.219
	  Hostname:    addons-933500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 4650c2b6f8cd46aba1f8e0a28c56cd05
	  System UUID:                8d347f19-8495-0447-a951-7b05319e3639
	  Boot ID:                    1b7a53b2-e46e-4161-bb7b-800d07b6db87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-4j5wd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  gadget                      gadget-fwwtf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-7xbvq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  headlamp                    headlamp-7867546754-nsdnt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-nh9c6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 coredns-7db6d8ff4d-g8bxj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpathplugin-p747v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-addons-933500                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m16s
	  kube-system                 kube-apiserver-addons-933500                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-933500        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-proxy-jqj96                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-933500                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 metrics-server-c59844bb4-2dk8v               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m32s
	  kube-system                 snapshot-controller-745499f584-gp6ts         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 snapshot-controller-745499f584-r97tb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 tiller-deploy-6677d64bcd-7npsr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  local-path-storage          local-path-provisioner-8d985888d-rs8z8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  my-volcano                  test-job-nginx-0                             1 (50%)       1 (50%)     0 (0%)           0 (0%)         39s
	  volcano-system              volcano-admission-5f7844f7bc-r5vwd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  volcano-system              volcano-controllers-59cb4746db-grcdb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  volcano-system              volcano-scheduler-844f6db89b-2dl4x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  yakd-dashboard              yakd-dashboard-799879c74f-pshxt              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1950m (97%)  1 (50%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node addons-933500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node addons-933500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node addons-933500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s                  kubelet          Node addons-933500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s                  kubelet          Node addons-933500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s                  kubelet          Node addons-933500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m15s                  kubelet          Node addons-933500 status is now: NodeReady
	  Normal  RegisteredNode           5m2s                   node-controller  Node addons-933500 event: Registered Node addons-933500 in Controller
	
	
	==> dmesg <==
	[ +10.037428] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.007205] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.045002] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.048656] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.190706] kauditd_printk_skb: 66 callbacks suppressed
	[  +8.174923] kauditd_printk_skb: 35 callbacks suppressed
	[Jul17 00:11] kauditd_printk_skb: 2 callbacks suppressed
	[ +31.422090] hrtimer: interrupt took 2693111 ns
	[Jul17 00:12] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.190302] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.598758] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.089657] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.090959] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.510768] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.198921] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.610955] kauditd_printk_skb: 41 callbacks suppressed
	[Jul17 00:13] kauditd_printk_skb: 38 callbacks suppressed
	[ +28.413357] kauditd_printk_skb: 48 callbacks suppressed
	[ +12.954541] kauditd_printk_skb: 22 callbacks suppressed
	[Jul17 00:14] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.397240] kauditd_printk_skb: 31 callbacks suppressed
	[  +8.980293] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.218451] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.712683] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.930173] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [a6799eff8b62] <==
	{"level":"info","ts":"2024-07-17T00:14:21.056459Z","caller":"traceutil/trace.go:171","msg":"trace[1303002591] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:1688; }","duration":"207.025228ms","start":"2024-07-17T00:14:20.849424Z","end":"2024-07-17T00:14:21.056449Z","steps":["trace[1303002591] 'agreement among raft nodes before linearized reading'  (duration: 206.960528ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:14:21.056432Z","caller":"traceutil/trace.go:171","msg":"trace[228886132] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1688; }","duration":"216.084764ms","start":"2024-07-17T00:14:20.840341Z","end":"2024-07-17T00:14:21.056426Z","steps":["trace[228886132] 'agreement among raft nodes before linearized reading'  (duration: 216.009764ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:14:28.942693Z","caller":"traceutil/trace.go:171","msg":"trace[1722732084] transaction","detail":"{read_only:false; response_revision:1712; number_of_response:1; }","duration":"104.763516ms","start":"2024-07-17T00:14:28.837907Z","end":"2024-07-17T00:14:28.942671Z","steps":["trace[1722732084] 'process raft request'  (duration: 104.659815ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:14:33.419385Z","caller":"traceutil/trace.go:171","msg":"trace[1034814042] transaction","detail":"{read_only:false; response_revision:1732; number_of_response:1; }","duration":"133.335124ms","start":"2024-07-17T00:14:33.286026Z","end":"2024-07-17T00:14:33.419361Z","steps":["trace[1034814042] 'process raft request'  (duration: 133.099323ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:14:36.300336Z","caller":"traceutil/trace.go:171","msg":"trace[304254402] linearizableReadLoop","detail":"{readStateIndex:1834; appliedIndex:1833; }","duration":"345.136357ms","start":"2024-07-17T00:14:35.95518Z","end":"2024-07-17T00:14:36.300316Z","steps":["trace[304254402] 'read index received'  (duration: 345.025356ms)","trace[304254402] 'applied index is now lower than readState.Index'  (duration: 110.501µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:14:36.300638Z","caller":"traceutil/trace.go:171","msg":"trace[1033713119] transaction","detail":"{read_only:false; response_revision:1746; number_of_response:1; }","duration":"405.873695ms","start":"2024-07-17T00:14:35.89471Z","end":"2024-07-17T00:14:36.300584Z","steps":["trace[1033713119] 'process raft request'  (duration: 405.430793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:36.300727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.545658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:14:36.300812Z","caller":"traceutil/trace.go:171","msg":"trace[1194421543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1746; }","duration":"345.675059ms","start":"2024-07-17T00:14:35.955125Z","end":"2024-07-17T00:14:36.3008Z","steps":["trace[1194421543] 'agreement among raft nodes before linearized reading'  (duration: 345.548258ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:36.300839Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:14:35.955031Z","time spent":"345.800059ms","remote":"127.0.0.1:57266","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-17T00:14:36.351232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:14:35.894649Z","time spent":"406.014795ms","remote":"127.0.0.1:57564","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-dd6vfo2b5ohhm5lhjvrzbufswe\" mod_revision:1703 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-dd6vfo2b5ohhm5lhjvrzbufswe\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-dd6vfo2b5ohhm5lhjvrzbufswe\" > >"}
	{"level":"info","ts":"2024-07-17T00:14:36.352Z","caller":"traceutil/trace.go:171","msg":"trace[1394664613] transaction","detail":"{read_only:false; response_revision:1747; number_of_response:1; }","duration":"281.989908ms","start":"2024-07-17T00:14:36.069996Z","end":"2024-07-17T00:14:36.351986Z","steps":["trace[1394664613] 'process raft request'  (duration: 281.891607ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:36.352282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.814003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/default/\" range_end:\"/registry/resourcequotas/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:14:36.352316Z","caller":"traceutil/trace.go:171","msg":"trace[1928628803] range","detail":"{range_begin:/registry/resourcequotas/default/; range_end:/registry/resourcequotas/default0; response_count:0; response_revision:1747; }","duration":"229.916103ms","start":"2024-07-17T00:14:36.12239Z","end":"2024-07-17T00:14:36.352307Z","steps":["trace[1928628803] 'agreement among raft nodes before linearized reading'  (duration: 229.793503ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:36.352587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.426428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-07-17T00:14:36.352616Z","caller":"traceutil/trace.go:171","msg":"trace[2031451587] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1747; }","duration":"363.500428ms","start":"2024-07-17T00:14:35.989108Z","end":"2024-07-17T00:14:36.352609Z","steps":["trace[2031451587] 'agreement among raft nodes before linearized reading'  (duration: 363.339328ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:36.352635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:14:35.989093Z","time spent":"363.536228ms","remote":"127.0.0.1:57364","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":286,"response size":30,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"info","ts":"2024-07-17T00:14:46.079988Z","caller":"traceutil/trace.go:171","msg":"trace[7266609] transaction","detail":"{read_only:false; response_revision:1785; number_of_response:1; }","duration":"511.592417ms","start":"2024-07-17T00:14:45.568343Z","end":"2024-07-17T00:14:46.079935Z","steps":["trace[7266609] 'process raft request'  (duration: 511.268517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:46.082354Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:14:45.568326Z","time spent":"511.741718ms","remote":"127.0.0.1:57444","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1783 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-17T00:14:46.104926Z","caller":"traceutil/trace.go:171","msg":"trace[491552789] linearizableReadLoop","detail":"{readStateIndex:1877; appliedIndex:1876; }","duration":"278.923319ms","start":"2024-07-17T00:14:45.825928Z","end":"2024-07-17T00:14:46.104851Z","steps":["trace[491552789] 'read index received'  (duration: 254.433455ms)","trace[491552789] 'applied index is now lower than readState.Index'  (duration: 24.488464ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:14:46.105632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.682921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3735"}
	{"level":"info","ts":"2024-07-17T00:14:46.105673Z","caller":"traceutil/trace.go:171","msg":"trace[1589551589] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1785; }","duration":"279.765621ms","start":"2024-07-17T00:14:45.825896Z","end":"2024-07-17T00:14:46.105661Z","steps":["trace[1589551589] 'agreement among raft nodes before linearized reading'  (duration: 279.30162ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:46.106241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.870687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6515"}
	{"level":"info","ts":"2024-07-17T00:14:46.106311Z","caller":"traceutil/trace.go:171","msg":"trace[1188767548] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1785; }","duration":"188.971287ms","start":"2024-07-17T00:14:45.917331Z","end":"2024-07-17T00:14:46.106302Z","steps":["trace[1188767548] 'agreement among raft nodes before linearized reading'  (duration: 188.825987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:14:46.107297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.398101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:14:46.107396Z","caller":"traceutil/trace.go:171","msg":"trace[845821529] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1785; }","duration":"155.522501ms","start":"2024-07-17T00:14:45.951864Z","end":"2024-07-17T00:14:46.107387Z","steps":["trace[845821529] 'agreement among raft nodes before linearized reading'  (duration: 155.371101ms)"],"step_count":1}
	
	
	==> gcp-auth [b473ed685152] <==
	2024/07/17 00:13:59 GCP Auth Webhook started!
	2024/07/17 00:14:12 Ready to marshal response ...
	2024/07/17 00:14:12 Ready to write response ...
	2024/07/17 00:14:16 Ready to marshal response ...
	2024/07/17 00:14:16 Ready to write response ...
	2024/07/17 00:14:16 Ready to marshal response ...
	2024/07/17 00:14:16 Ready to write response ...
	2024/07/17 00:14:16 Ready to marshal response ...
	2024/07/17 00:14:16 Ready to write response ...
	2024/07/17 00:14:17 Ready to marshal response ...
	2024/07/17 00:14:17 Ready to write response ...
	2024/07/17 00:14:18 Ready to marshal response ...
	2024/07/17 00:14:18 Ready to write response ...
	2024/07/17 00:14:27 Ready to marshal response ...
	2024/07/17 00:14:27 Ready to write response ...
	2024/07/17 00:14:37 Ready to marshal response ...
	2024/07/17 00:14:37 Ready to write response ...
	2024/07/17 00:14:53 Ready to marshal response ...
	2024/07/17 00:14:53 Ready to write response ...
	
	
	==> kernel <==
	 00:14:57 up 7 min,  0 users,  load average: 2.79, 2.54, 1.26
	Linux addons-933500 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e9ac8c39d8d] <==
	W0717 00:12:49.154996       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:50.236442       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:51.335479       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:52.419185       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:53.506014       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:54.527569       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:55.548899       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:56.560499       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:12:57.587233       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.19.68:443: connect: connection refused
	W0717 00:13:23.300321       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.70.106:443: connect: connection refused
	E0717 00:13:23.300463       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.70.106:443: connect: connection refused
	W0717 00:13:41.338386       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.70.106:443: connect: connection refused
	E0717 00:13:41.338530       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.70.106:443: connect: connection refused
	W0717 00:13:41.428624       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.70.106:443: connect: connection refused
	E0717 00:13:41.428657       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.70.106:443: connect: connection refused
	I0717 00:14:16.640874       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.232.208"}
	I0717 00:14:17.584693       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0717 00:14:17.665678       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0717 00:14:46.083484       1 trace.go:236] Trace[2105626136]: "Update" accept:application/json, */*,audit-id:6f7b9b4f-aa7c-4f99-89a0-8e001cbdc952,client:172.27.174.219,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Jul-2024 00:14:45.566) (total time: 517ms):
	Trace[2105626136]: ["GuaranteedUpdate etcd3" audit-id:6f7b9b4f-aa7c-4f99-89a0-8e001cbdc952,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 516ms (00:14:45.567)
	Trace[2105626136]:  ---"Txn call completed" 515ms (00:14:46.083)]
	Trace[2105626136]: [517.002731ms] [517.002731ms] END
	E0717 00:14:50.178878       1 conn.go:339] Error on socket receive: read tcp 172.27.174.219:8443->172.27.160.1:64599: use of closed network connection
	E0717 00:14:50.200808       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 172.27.174.219:8443->10.244.0.30:59108: read: connection reset by peer
	I0717 00:14:57.119433       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c7e8a7f34e6a] <==
	I0717 00:13:45.708960       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:13:45.736785       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0717 00:13:45.985206       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:13:45.995452       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0717 00:13:46.011352       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0717 00:13:46.020093       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0717 00:13:46.731035       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:13:46.750662       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:13:46.776237       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:13:46.796247       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:14:00.218307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="26.571924ms"
	I0717 00:14:00.218940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="66.7µs"
	I0717 00:14:16.049104       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0717 00:14:16.070880       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:14:16.383214       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0717 00:14:16.383391       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0717 00:14:16.861359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="139.016552ms"
	I0717 00:14:16.976726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="114.71449ms"
	I0717 00:14:16.977045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="96.8µs"
	I0717 00:14:16.983334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="122.4µs"
	I0717 00:14:17.110664       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	I0717 00:14:30.578514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="201.7µs"
	I0717 00:14:30.660540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="19.881479ms"
	I0717 00:14:30.660954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="280.601µs"
	I0717 00:14:35.495542       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="15.3µs"
	
	
	==> kube-proxy [ea134cc656c8] <==
	I0717 00:10:08.741849       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:10:08.798051       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.174.219"]
	I0717 00:10:09.062801       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:10:09.062906       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:10:09.062938       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:10:09.092447       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:10:09.093011       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:10:09.093038       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:10:09.098690       1 config.go:192] "Starting service config controller"
	I0717 00:10:09.099218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:10:09.099918       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:10:09.100205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:10:09.112151       1 config.go:319] "Starting node config controller"
	I0717 00:10:09.112209       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:10:09.204159       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:10:09.204980       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:10:09.217225       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a6cb62cff39a] <==
	W0717 00:09:39.033389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:09:39.033622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:09:39.035566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:09:39.035814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:09:39.088860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:09:39.088985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:09:39.136112       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:09:39.136314       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:09:39.248544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:09:39.248649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:09:39.376079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:09:39.376310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:09:39.377879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:09:39.377955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:09:39.394573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:09:39.394867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:09:39.429291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:09:39.429547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:09:39.508341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:09:39.508564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:09:39.541898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:09:39.542101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:09:39.626513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:09:39.626889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:09:41.970888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:14:41 addons-933500 kubelet[2272]: I0717 00:14:41.676310    2272 scope.go:117] "RemoveContainer" containerID="81df0fcb0ceda66b43007c3e5a1a62001ad0aa8a479f850a2a47c35f44b1181d"
	Jul 17 00:14:41 addons-933500 kubelet[2272]: I0717 00:14:41.732557    2272 scope.go:117] "RemoveContainer" containerID="c44fa219442d2cc490321056ab46ae9c506a9a5ed29a10216cc2d8adbc37f9a3"
	Jul 17 00:14:41 addons-933500 kubelet[2272]: I0717 00:14:41.784637    2272 scope.go:117] "RemoveContainer" containerID="1ed3d7c18ca0fa603bf5924188b62d4404c524270e2f5297051a1fc935cc1e0f"
	Jul 17 00:14:46 addons-933500 kubelet[2272]: I0717 00:14:46.929601    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="my-volcano/test-job-nginx-0" podStartSLOduration=5.925553178 podStartE2EDuration="28.929553257s" podCreationTimestamp="2024-07-17 00:14:18 +0000 UTC" firstStartedPulling="2024-07-17 00:14:21.761412003 +0000 UTC m=+280.782606132" lastFinishedPulling="2024-07-17 00:14:44.765412082 +0000 UTC m=+303.786606211" observedRunningTime="2024-07-17 00:14:46.928369254 +0000 UTC m=+305.949563483" watchObservedRunningTime="2024-07-17 00:14:46.929553257 +0000 UTC m=+305.950747486"
	Jul 17 00:14:48 addons-933500 kubelet[2272]: I0717 00:14:48.207300    2272 scope.go:117] "RemoveContainer" containerID="703ad2dc6c08c34eba56af6bbdd78d3535e0f9de11131e473be402745d847137"
	Jul 17 00:14:48 addons-933500 kubelet[2272]: E0717 00:14:48.207845    2272 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-fwwtf_gadget(acaa1a79-284d-419b-8129-14fa06df8701)\"" pod="gadget/gadget-fwwtf" podUID="acaa1a79-284d-419b-8129-14fa06df8701"
	Jul 17 00:14:50 addons-933500 kubelet[2272]: I0717 00:14:50.061124    2272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:14:50 addons-933500 kubelet[2272]: I0717 00:14:50.140293    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/helm-test" podStartSLOduration=5.160346665 podStartE2EDuration="23.140271859s" podCreationTimestamp="2024-07-17 00:14:27 +0000 UTC" firstStartedPulling="2024-07-17 00:14:30.979332707 +0000 UTC m=+290.000526836" lastFinishedPulling="2024-07-17 00:14:48.959257801 +0000 UTC m=+307.980452030" observedRunningTime="2024-07-17 00:14:50.084305214 +0000 UTC m=+309.105499443" watchObservedRunningTime="2024-07-17 00:14:50.140271859 +0000 UTC m=+309.161466088"
	Jul 17 00:14:50 addons-933500 kubelet[2272]: E0717 00:14:50.172566    2272 remote_runtime.go:557] "Attach container from runtime service failed" err="rpc error: code = InvalidArgument desc = tty and stderr cannot both be true" containerID="d177d69825f1ff74548ffac4852399a481d21011f307b366667f1b84e5415f47"
	Jul 17 00:14:51 addons-933500 kubelet[2272]: I0717 00:14:51.174455    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=4.068818537 podStartE2EDuration="14.174434841s" podCreationTimestamp="2024-07-17 00:14:37 +0000 UTC" firstStartedPulling="2024-07-17 00:14:39.48065172 +0000 UTC m=+298.501845849" lastFinishedPulling="2024-07-17 00:14:49.586267924 +0000 UTC m=+308.607462153" observedRunningTime="2024-07-17 00:14:50.143296967 +0000 UTC m=+309.164491196" watchObservedRunningTime="2024-07-17 00:14:51.174434841 +0000 UTC m=+310.195629070"
	Jul 17 00:14:52 addons-933500 kubelet[2272]: I0717 00:14:52.568172    2272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-799qj\" (UniqueName: \"kubernetes.io/projected/d8dd905a-ac9d-480c-ab89-37c34b419db6-kube-api-access-799qj\") pod \"d8dd905a-ac9d-480c-ab89-37c34b419db6\" (UID: \"d8dd905a-ac9d-480c-ab89-37c34b419db6\") "
	Jul 17 00:14:52 addons-933500 kubelet[2272]: I0717 00:14:52.577039    2272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8dd905a-ac9d-480c-ab89-37c34b419db6-kube-api-access-799qj" (OuterVolumeSpecName: "kube-api-access-799qj") pod "d8dd905a-ac9d-480c-ab89-37c34b419db6" (UID: "d8dd905a-ac9d-480c-ab89-37c34b419db6"). InnerVolumeSpecName "kube-api-access-799qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:14:52 addons-933500 kubelet[2272]: I0717 00:14:52.669714    2272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-799qj\" (UniqueName: \"kubernetes.io/projected/d8dd905a-ac9d-480c-ab89-37c34b419db6-kube-api-access-799qj\") on node \"addons-933500\" DevicePath \"\""
	Jul 17 00:14:53 addons-933500 kubelet[2272]: I0717 00:14:53.161508    2272 topology_manager.go:215] "Topology Admit Handler" podUID="0c737d45-ea7f-4f79-b91c-8916f7c583b9" podNamespace="kube-system" podName="helm-test"
	Jul 17 00:14:53 addons-933500 kubelet[2272]: E0717 00:14:53.161943    2272 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8dd905a-ac9d-480c-ab89-37c34b419db6" containerName="helm-test"
	Jul 17 00:14:53 addons-933500 kubelet[2272]: I0717 00:14:53.162074    2272 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8dd905a-ac9d-480c-ab89-37c34b419db6" containerName="helm-test"
	Jul 17 00:14:53 addons-933500 kubelet[2272]: I0717 00:14:53.257722    2272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8dd905a-ac9d-480c-ab89-37c34b419db6" path="/var/lib/kubelet/pods/d8dd905a-ac9d-480c-ab89-37c34b419db6/volumes"
	Jul 17 00:14:53 addons-933500 kubelet[2272]: I0717 00:14:53.258390    2272 scope.go:117] "RemoveContainer" containerID="d177d69825f1ff74548ffac4852399a481d21011f307b366667f1b84e5415f47"
	Jul 17 00:14:53 addons-933500 kubelet[2272]: I0717 00:14:53.276713    2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4wtl\" (UniqueName: \"kubernetes.io/projected/0c737d45-ea7f-4f79-b91c-8916f7c583b9-kube-api-access-r4wtl\") pod \"helm-test\" (UID: \"0c737d45-ea7f-4f79-b91c-8916f7c583b9\") " pod="kube-system/helm-test"
	Jul 17 00:14:53 addons-933500 kubelet[2272]: I0717 00:14:53.464950    2272 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:14:56 addons-933500 kubelet[2272]: I0717 00:14:56.913431    2272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4wtl\" (UniqueName: \"kubernetes.io/projected/0c737d45-ea7f-4f79-b91c-8916f7c583b9-kube-api-access-r4wtl\") pod \"0c737d45-ea7f-4f79-b91c-8916f7c583b9\" (UID: \"0c737d45-ea7f-4f79-b91c-8916f7c583b9\") "
	Jul 17 00:14:56 addons-933500 kubelet[2272]: I0717 00:14:56.922064    2272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c737d45-ea7f-4f79-b91c-8916f7c583b9-kube-api-access-r4wtl" (OuterVolumeSpecName: "kube-api-access-r4wtl") pod "0c737d45-ea7f-4f79-b91c-8916f7c583b9" (UID: "0c737d45-ea7f-4f79-b91c-8916f7c583b9"). InnerVolumeSpecName "kube-api-access-r4wtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:14:57 addons-933500 kubelet[2272]: I0717 00:14:57.015204    2272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r4wtl\" (UniqueName: \"kubernetes.io/projected/0c737d45-ea7f-4f79-b91c-8916f7c583b9-kube-api-access-r4wtl\") on node \"addons-933500\" DevicePath \"\""
	Jul 17 00:14:57 addons-933500 kubelet[2272]: I0717 00:14:57.293244    2272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c737d45-ea7f-4f79-b91c-8916f7c583b9" path="/var/lib/kubelet/pods/0c737d45-ea7f-4f79-b91c-8916f7c583b9/volumes"
	Jul 17 00:14:57 addons-933500 kubelet[2272]: I0717 00:14:57.539697    2272 scope.go:117] "RemoveContainer" containerID="1e4a2fb1fcb876ceded595ad4b998a7cb9bef93c9ef4689f8bc9acde749bd17b"
	
	
	==> storage-provisioner [9df03d52073f] <==
	I0717 00:10:23.893354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:10:23.946157       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:10:23.946230       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:10:24.005312       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:10:24.011926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-933500_032beac0-574a-4412-bafc-c2fb5a662280!
	I0717 00:10:24.029887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f16fc794-6640-47f8-ba8b-86a79e491f28", APIVersion:"v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-933500_032beac0-574a-4412-bafc-c2fb5a662280 became leader
	I0717 00:10:24.112238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-933500_032beac0-574a-4412-bafc-c2fb5a662280!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 17:14:48.613339    5928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-933500 -n addons-933500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-933500 -n addons-933500: (13.6155872s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-933500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-txtv2 ingress-nginx-admission-patch-bffc7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-933500 describe pod ingress-nginx-admission-create-txtv2 ingress-nginx-admission-patch-bffc7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-933500 describe pod ingress-nginx-admission-create-txtv2 ingress-nginx-admission-patch-bffc7: exit status 1 (169.0156ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-txtv2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bffc7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-933500 describe pod ingress-nginx-admission-create-txtv2 ingress-nginx-admission-patch-bffc7: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.35s)

                                                
                                    
TestErrorSpam/setup (197.1s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-153600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-153600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 --driver=hyperv: (3m17.0962424s)
error_spam_test.go:96: unexpected stderr: "W0716 17:19:05.128513   11592 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-153600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=19265
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-153600" primary control-plane node in "nospam-153600" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-153600" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0716 17:19:05.128513   11592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (197.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (33.5s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-804300 -n functional-804300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-804300 -n functional-804300: (11.7194526s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 logs -n 25: (8.6117703s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:23 PDT | 16 Jul 24 17:23 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:23 PDT | 16 Jul 24 17:23 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:23 PDT | 16 Jul 24 17:23 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:23 PDT | 16 Jul 24 17:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:24 PDT | 16 Jul 24 17:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:24 PDT | 16 Jul 24 17:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-153600 --log_dir                                     | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:24 PDT | 16 Jul 24 17:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-153600                                            | nospam-153600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:25 PDT | 16 Jul 24 17:25 PDT |
	| start   | -p functional-804300                                        | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:25 PDT | 16 Jul 24 17:29 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-804300                                        | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:29 PDT | 16 Jul 24 17:31 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-804300 cache add                                 | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:31 PDT | 16 Jul 24 17:31 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-804300 cache add                                 | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:31 PDT | 16 Jul 24 17:31 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-804300 cache add                                 | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:31 PDT | 16 Jul 24 17:31 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-804300 cache add                                 | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:31 PDT | 16 Jul 24 17:32 PDT |
	|         | minikube-local-cache-test:functional-804300                 |                   |                   |         |                     |                     |
	| cache   | functional-804300 cache delete                              | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | minikube-local-cache-test:functional-804300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	| ssh     | functional-804300 ssh sudo                                  | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-804300                                           | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-804300 ssh                                       | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-804300 cache reload                              | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	| ssh     | functional-804300 ssh                                       | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-804300 kubectl --                                | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:32 PDT | 16 Jul 24 17:32 PDT |
	|         | --context functional-804300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:29:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:29:16.975429   14768 out.go:291] Setting OutFile to fd 984 ...
	I0716 17:29:16.976516   14768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:29:16.976516   14768 out.go:304] Setting ErrFile to fd 804...
	I0716 17:29:16.976516   14768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:29:17.004353   14768 out.go:298] Setting JSON to false
	I0716 17:29:17.007535   14768 start.go:129] hostinfo: {"hostname":"minikube1","uptime":17796,"bootTime":1721158360,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:29:17.007535   14768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:29:17.012661   14768 out.go:177] * [functional-804300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:29:17.015588   14768 notify.go:220] Checking for updates...
	I0716 17:29:17.017990   14768 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:29:17.020149   14768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:29:17.023311   14768 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:29:17.027314   14768 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:29:17.030015   14768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:29:17.032988   14768 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:29:17.032988   14768 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:29:22.183326   14768 out.go:177] * Using the hyperv driver based on existing profile
	I0716 17:29:22.186753   14768 start.go:297] selected driver: hyperv
	I0716 17:29:22.186753   14768 start.go:901] validating driver "hyperv" against &{Name:functional-804300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-804300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.236 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:29:22.186753   14768 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:29:22.237286   14768 cni.go:84] Creating CNI manager for ""
	I0716 17:29:22.237366   14768 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:29:22.237546   14768 start.go:340] cluster config:
	{Name:functional-804300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-804300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.236 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:29:22.237905   14768 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:29:22.244821   14768 out.go:177] * Starting "functional-804300" primary control-plane node in "functional-804300" cluster
	I0716 17:29:22.247184   14768 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:29:22.247184   14768 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:29:22.247184   14768 cache.go:56] Caching tarball of preloaded images
	I0716 17:29:22.247184   14768 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:29:22.248154   14768 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:29:22.248154   14768 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\config.json ...
	I0716 17:29:22.250562   14768 start.go:360] acquireMachinesLock for functional-804300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:29:22.250562   14768 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-804300"
	I0716 17:29:22.251442   14768 start.go:96] Skipping create...Using existing machine configuration
	I0716 17:29:22.251660   14768 fix.go:54] fixHost starting: 
	I0716 17:29:22.251964   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:24.937427   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:24.937427   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:24.937427   14768 fix.go:112] recreateIfNeeded on functional-804300: state=Running err=<nil>
	W0716 17:29:24.937427   14768 fix.go:138] unexpected machine state, will restart: <nil>
	I0716 17:29:24.942193   14768 out.go:177] * Updating the running hyperv "functional-804300" VM ...
	I0716 17:29:24.944938   14768 machine.go:94] provisionDockerMachine start ...
	I0716 17:29:24.944938   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:27.084977   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:27.084977   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:27.085109   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:29.622615   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:29.622615   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:29.629902   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:29:29.630725   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:29:29.630725   14768 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:29:29.762598   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-804300
	
	I0716 17:29:29.762598   14768 buildroot.go:166] provisioning hostname "functional-804300"
	I0716 17:29:29.762598   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:31.874452   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:31.874452   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:31.874548   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:34.419195   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:34.419195   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:34.425323   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:29:34.426038   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:29:34.426038   14768 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-804300 && echo "functional-804300" | sudo tee /etc/hostname
	I0716 17:29:34.602390   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-804300
	
	I0716 17:29:34.602606   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:36.816321   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:36.816321   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:36.817049   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:39.417189   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:39.417189   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:39.422533   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:29:39.423297   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:29:39.423297   14768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-804300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-804300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-804300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:29:39.550980   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
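The SSH snippet above (lines beginning `if ! grep -xq ...`) makes the `127.0.1.1` entry in `/etc/hosts` name the machine, rewriting an existing entry or appending a new one. A standalone sketch of the same logic run against a scratch copy (GNU grep/sed assumed; the hostname comes from the log, the sample file contents are illustrative):

```shell
#!/usr/bin/env bash
# Recreate the /etc/hosts update logic from the provisioning log against a temp file.
set -e
NAME=functional-804300
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -xq ".*\s$NAME" "$HOSTS"; then         # hostname not present yet
  if grep -xq '127.0.1.1\s.*' "$HOSTS"; then     # a 127.0.1.1 entry exists: rewrite it
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" "$HOSTS"
  else                                           # no entry: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127.0.1.1' "$HOSTS"    # -> 127.0.1.1 functional-804300
rm -f "$HOSTS"
```

Because the update is guarded by the outer `grep -xq`, re-running it is a no-op once the entry is in place.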
	I0716 17:29:39.551132   14768 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:29:39.551132   14768 buildroot.go:174] setting up certificates
	I0716 17:29:39.551132   14768 provision.go:84] configureAuth start
	I0716 17:29:39.551268   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:41.726503   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:41.726503   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:41.727399   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:44.316838   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:44.316838   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:44.317685   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:46.455207   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:46.455207   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:46.455533   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:49.005780   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:49.005979   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:49.006088   14768 provision.go:143] copyHostCerts
	I0716 17:29:49.006253   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:29:49.006589   14768 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:29:49.006680   14768 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:29:49.007130   14768 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:29:49.008232   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:29:49.008732   14768 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:29:49.008831   14768 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:29:49.009204   14768 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:29:49.010023   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:29:49.010023   14768 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:29:49.010023   14768 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:29:49.010705   14768 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:29:49.012085   14768 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-804300 san=[127.0.0.1 172.27.170.236 functional-804300 localhost minikube]
	I0716 17:29:49.243217   14768 provision.go:177] copyRemoteCerts
	I0716 17:29:49.258175   14768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:29:49.258175   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:51.388627   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:51.389441   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:51.389441   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:53.983344   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:53.983344   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:53.984109   14768 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
	I0716 17:29:54.088641   14768 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8299606s)
	I0716 17:29:54.088641   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:29:54.089208   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:29:54.143104   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:29:54.143104   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0716 17:29:54.202002   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:29:54.202120   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:29:54.251775   14768 provision.go:87] duration metric: took 14.7005025s to configureAuth
	I0716 17:29:54.251775   14768 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:29:54.252504   14768 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:29:54.252674   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:29:56.417724   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:29:56.417724   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:56.418143   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:29:59.020615   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:29:59.020615   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:29:59.026694   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:29:59.027468   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:29:59.027468   14768 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:29:59.158463   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:29:59.158463   14768 buildroot.go:70] root file system type: tmpfs
	I0716 17:29:59.158997   14768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:29:59.159157   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:01.357154   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:01.358073   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:01.358073   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:03.932210   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:03.932210   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:03.941001   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:30:03.941538   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:30:03.941538   14768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:30:04.102162   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:30:04.102280   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:06.286805   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:06.286805   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:06.287013   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:08.866641   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:08.866641   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:08.872423   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:30:08.873185   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:30:08.873185   14768 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:30:09.027142   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
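The `diff -u ... || { mv ...; systemctl ... restart docker; }` one-liner above installs the freshly generated `docker.service` only when it differs from the current unit, so an unchanged config never triggers a daemon restart. A minimal sketch of that compare-then-swap pattern on scratch files (no systemd involved; file contents are illustrative):

```shell
#!/usr/bin/env bash
# Compare-then-swap: install the new file only if it differs from the current one.
OLD=$(mktemp); NEW=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd' > "$OLD"
echo 'ExecStart=/usr/bin/dockerd --tlsverify' > "$NEW"

if ! diff -u "$OLD" "$NEW" > /dev/null; then
  mv "$NEW" "$OLD"   # files differ: swap in the new unit
  echo 'changed'     # at this point minikube would daemon-reload and restart docker
fi
cat "$OLD"           # -> ExecStart=/usr/bin/dockerd --tlsverify
rm -f "$OLD" "$NEW"
```

The log's `cmd1 || { cmd2; }` form and the `if ! cmd1; then cmd2; fi` form here are equivalent.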
	I0716 17:30:09.027142   14768 machine.go:97] duration metric: took 44.0820236s to provisionDockerMachine
	I0716 17:30:09.027142   14768 start.go:293] postStartSetup for "functional-804300" (driver="hyperv")
	I0716 17:30:09.027142   14768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:30:09.040507   14768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:30:09.040507   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:11.169495   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:11.169495   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:11.170514   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:13.747468   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:13.747605   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:13.747845   14768 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
	I0716 17:30:13.845210   14768 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8043159s)
	I0716 17:30:13.859457   14768 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:30:13.866918   14768 command_runner.go:130] > NAME=Buildroot
	I0716 17:30:13.866918   14768 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 17:30:13.866918   14768 command_runner.go:130] > ID=buildroot
	I0716 17:30:13.866918   14768 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 17:30:13.866918   14768 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 17:30:13.866918   14768 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:30:13.866918   14768 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:30:13.867635   14768 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:30:13.868247   14768 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:30:13.868247   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:30:13.869941   14768 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4740\hosts -> hosts in /etc/test/nested/copy/4740
	I0716 17:30:13.870003   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4740\hosts -> /etc/test/nested/copy/4740/hosts
	I0716 17:30:13.882354   14768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4740
	I0716 17:30:13.903303   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:30:13.967321   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4740\hosts --> /etc/test/nested/copy/4740/hosts (40 bytes)
	I0716 17:30:14.017358   14768 start.go:296] duration metric: took 4.9901954s for postStartSetup
	I0716 17:30:14.017481   14768 fix.go:56] duration metric: took 51.7657211s for fixHost
	I0716 17:30:14.017605   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:16.186684   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:16.186684   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:16.187178   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:18.786174   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:18.786174   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:18.792725   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:30:18.793346   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:30:18.793346   14768 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:30:18.919535   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721176218.925816159
	
	I0716 17:30:18.919535   14768 fix.go:216] guest clock: 1721176218.925816159
	I0716 17:30:18.919535   14768 fix.go:229] Guest: 2024-07-16 17:30:18.925816159 -0700 PDT Remote: 2024-07-16 17:30:14.0174815 -0700 PDT m=+57.128623201 (delta=4.908334659s)
	I0716 17:30:18.919535   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:21.099974   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:21.100637   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:21.100730   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:23.684025   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:23.684025   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:23.690062   14768 main.go:141] libmachine: Using SSH client type: native
	I0716 17:30:23.690940   14768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.236 22 <nil> <nil>}
	I0716 17:30:23.690940   14768 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721176218
	I0716 17:30:23.844222   14768 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:30:18 UTC 2024
	
	I0716 17:30:23.844222   14768 fix.go:236] clock set: Wed Jul 17 00:30:18 UTC 2024
	 (err=<nil>)
	I0716 17:30:23.844222   14768 start.go:83] releasing machines lock for "functional-804300", held for 1m1.5926094s
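The `fix.go` lines above measure guest-vs-host clock skew (here `delta=4.908334659s`) and then reset the guest clock over SSH with `sudo date -s @<epoch>`. A sketch of that skew check using the epoch values from the log (the 2-second threshold is an illustrative assumption, not minikube's actual cutoff):

```shell
#!/usr/bin/env bash
# Clock-skew check: compare guest and host epoch seconds, flag large drift.
guest=1721176218        # guest clock from the log (seconds, truncated)
host=1721176214         # host clock, rounded from the log's Remote timestamp
delta=$((guest - host))

if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
  echo "skew ${delta}s: clock reset needed"   # the log then runs: sudo date -s @1721176218
fi
```

Keeping guest and host clocks aligned matters for TLS certificate validity and for the duration metrics the log reports.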
	I0716 17:30:23.845027   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:25.973462   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:25.974079   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:25.974149   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:28.608569   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:28.608569   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:28.608569   14768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:30:28.608569   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:28.624895   14768 ssh_runner.go:195] Run: cat /version.json
	I0716 17:30:28.624895   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:30:30.856901   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:30.857812   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:30.856901   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:30:30.857812   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:30.857812   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:30.857812   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:30:33.567509   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:33.567576   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:33.567638   14768 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
	I0716 17:30:33.596054   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:30:33.596054   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:30:33.596409   14768 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
	I0716 17:30:33.664189   14768 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 17:30:33.664425   14768 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0558353s)
	W0716 17:30:33.664425   14768 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:30:33.696458   14768 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 17:30:33.696560   14768 ssh_runner.go:235] Completed: cat /version.json: (5.0716443s)
	I0716 17:30:33.708730   14768 ssh_runner.go:195] Run: systemctl --version
	I0716 17:30:33.719450   14768 command_runner.go:130] > systemd 252 (252)
	I0716 17:30:33.719642   14768 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 17:30:33.732627   14768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:30:33.740752   14768 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 17:30:33.742292   14768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:30:33.755365   14768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:30:33.773301   14768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0716 17:30:33.773301   14768 start.go:495] detecting cgroup driver to use...
	I0716 17:30:33.773301   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:30:33.779702   14768 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:30:33.779787   14768 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:30:33.814979   14768 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 17:30:33.829262   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:30:33.863247   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:30:33.884057   14768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:30:33.899578   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:30:33.933112   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:30:33.964660   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:30:33.994678   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:30:34.023666   14768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:30:34.054277   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:30:34.092276   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:30:34.124279   14768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
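The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place (sandbox image, `SystemdCgroup`, runtime class, CNI conf dir, unprivileged ports). One of those edits, applied to a scratch copy of a minimal config (GNU sed assumed; the sample TOML content is illustrative, not the VM's actual file):

```shell
#!/usr/bin/env bash
# Apply the SystemdCgroup edit from the log to a throwaway config.toml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# \1 preserves the original indentation captured by ( *).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
grep SystemdCgroup "$CFG"    # ->   SystemdCgroup = false
rm -f "$CFG"
```

Forcing `SystemdCgroup = false` matches the log's note that containerd is being configured to use "cgroupfs" as its cgroup driver.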
	I0716 17:30:34.159147   14768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:30:34.176655   14768 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 17:30:34.191705   14768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:30:34.220833   14768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:30:34.537804   14768 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:30:34.565712   14768 start.go:495] detecting cgroup driver to use...
	I0716 17:30:34.576835   14768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:30:34.601678   14768 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 17:30:34.601678   14768 command_runner.go:130] > [Unit]
	I0716 17:30:34.601678   14768 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 17:30:34.601820   14768 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 17:30:34.601820   14768 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 17:30:34.601847   14768 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 17:30:34.601847   14768 command_runner.go:130] > StartLimitBurst=3
	I0716 17:30:34.601847   14768 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 17:30:34.601847   14768 command_runner.go:130] > [Service]
	I0716 17:30:34.601847   14768 command_runner.go:130] > Type=notify
	I0716 17:30:34.601847   14768 command_runner.go:130] > Restart=on-failure
	I0716 17:30:34.601847   14768 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 17:30:34.601847   14768 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 17:30:34.601924   14768 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 17:30:34.601924   14768 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 17:30:34.601924   14768 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 17:30:34.601924   14768 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 17:30:34.601924   14768 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 17:30:34.601997   14768 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 17:30:34.601997   14768 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 17:30:34.601997   14768 command_runner.go:130] > ExecStart=
	I0716 17:30:34.601997   14768 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 17:30:34.601997   14768 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 17:30:34.601997   14768 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 17:30:34.601997   14768 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 17:30:34.601997   14768 command_runner.go:130] > LimitNOFILE=infinity
	I0716 17:30:34.601997   14768 command_runner.go:130] > LimitNPROC=infinity
	I0716 17:30:34.601997   14768 command_runner.go:130] > LimitCORE=infinity
	I0716 17:30:34.601997   14768 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 17:30:34.602171   14768 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 17:30:34.602196   14768 command_runner.go:130] > TasksMax=infinity
	I0716 17:30:34.602196   14768 command_runner.go:130] > TimeoutStartSec=0
	I0716 17:30:34.602196   14768 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 17:30:34.602196   14768 command_runner.go:130] > Delegate=yes
	I0716 17:30:34.602196   14768 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 17:30:34.602196   14768 command_runner.go:130] > KillMode=process
	I0716 17:30:34.602282   14768 command_runner.go:130] > [Install]
	I0716 17:30:34.602282   14768 command_runner.go:130] > WantedBy=multi-user.target
	I0716 17:30:34.613814   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:30:34.651796   14768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:30:34.695796   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:30:34.734018   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:30:34.760969   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:30:34.798246   14768 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 17:30:34.811814   14768 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:30:34.820459   14768 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 17:30:34.833496   14768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:30:34.852563   14768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:30:34.898104   14768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:30:35.187861   14768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:30:35.444235   14768 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:30:35.444485   14768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:30:35.498688   14768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:30:35.786527   14768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:30:48.718620   14768 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9320399s)
	I0716 17:30:48.730730   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:30:48.768792   14768 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0716 17:30:48.821806   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:30:48.857363   14768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:30:49.080754   14768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:30:49.282706   14768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:30:49.485813   14768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:30:49.528879   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:30:49.563304   14768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:30:49.765462   14768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:30:49.884581   14768 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:30:49.896982   14768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:30:49.905229   14768 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 17:30:49.905229   14768 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 17:30:49.905229   14768 command_runner.go:130] > Device: 0,22	Inode: 1506        Links: 1
	I0716 17:30:49.905229   14768 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 17:30:49.905229   14768 command_runner.go:130] > Access: 2024-07-17 00:30:49.794734313 +0000
	I0716 17:30:49.905229   14768 command_runner.go:130] > Modify: 2024-07-17 00:30:49.794734313 +0000
	I0716 17:30:49.905229   14768 command_runner.go:130] > Change: 2024-07-17 00:30:49.801734035 +0000
	I0716 17:30:49.905229   14768 command_runner.go:130] >  Birth: -
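The "Will wait 60s for socket path" step above boils down to polling until the path exists as a unix socket. A sketch of such a poll loop, assuming only POSIX shell; the function name, timeout handling, and the nonexistent demo path are illustrative, not minikube's actual implementation:

```shell
# Poll until $1 exists as a unix socket (test -S) or $2 seconds elapse.
wait_for_socket() {
  sock=$1
  deadline=$(( $(date +%s) + ${2:-60} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    [ -S "$sock" ] && return 0   # -S is true only for socket files
    sleep 1
  done
  return 1
}

# Demo: a path that never appears times out and returns non-zero.
wait_for_socket /var/run/nonexistent-demo.sock 1 || echo "timed out"
```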
	I0716 17:30:49.905229   14768 start.go:563] Will wait 60s for crictl version
	I0716 17:30:49.918047   14768 ssh_runner.go:195] Run: which crictl
	I0716 17:30:49.924482   14768 command_runner.go:130] > /usr/bin/crictl
	I0716 17:30:49.936460   14768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:30:49.984518   14768 command_runner.go:130] > Version:  0.1.0
	I0716 17:30:49.984518   14768 command_runner.go:130] > RuntimeName:  docker
	I0716 17:30:49.984518   14768 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 17:30:49.984518   14768 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 17:30:49.984518   14768 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:30:49.995732   14768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:30:50.033540   14768 command_runner.go:130] > 27.0.3
	I0716 17:30:50.042845   14768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:30:50.076389   14768 command_runner.go:130] > 27.0.3
	I0716 17:30:50.080878   14768 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:30:50.081107   14768 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:30:50.085504   14768 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:30:50.085504   14768 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:30:50.085504   14768 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:30:50.085504   14768 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:30:50.088453   14768 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:30:50.088453   14768 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:30:50.099944   14768 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:30:50.107703   14768 command_runner.go:130] > 172.27.160.1	host.minikube.internal
	I0716 17:30:50.108264   14768 kubeadm.go:883] updating cluster {Name:functional-804300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-804300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.236 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:30:50.108492   14768 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:30:50.118066   14768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 17:30:50.143285   14768 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 17:30:50.143285   14768 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:30:50.143285   14768 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:30:50.143285   14768 docker.go:615] Images already preloaded, skipping extraction
	I0716 17:30:50.152964   14768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:30:50.177265   14768 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 17:30:50.178084   14768 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 17:30:50.178084   14768 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 17:30:50.178084   14768 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 17:30:50.178084   14768 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 17:30:50.178084   14768 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 17:30:50.178084   14768 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 17:30:50.178084   14768 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:30:50.179196   14768 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:30:50.179196   14768 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:30:50.179196   14768 kubeadm.go:934] updating node { 172.27.170.236 8441 v1.30.2 docker true true} ...
	I0716 17:30:50.179196   14768 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-804300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:functional-804300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:30:50.189662   14768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:30:50.221274   14768 command_runner.go:130] > cgroupfs
	I0716 17:30:50.221829   14768 cni.go:84] Creating CNI manager for ""
	I0716 17:30:50.221829   14768 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:30:50.221829   14768 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:30:50.221829   14768 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.236 APIServerPort:8441 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-804300 NodeName:functional-804300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:30:50.222645   14768 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.236
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-804300"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:30:50.233542   14768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:30:50.250550   14768 command_runner.go:130] > kubeadm
	I0716 17:30:50.250799   14768 command_runner.go:130] > kubectl
	I0716 17:30:50.250799   14768 command_runner.go:130] > kubelet
	I0716 17:30:50.250893   14768 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:30:50.260541   14768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 17:30:50.278632   14768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0716 17:30:50.317531   14768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:30:50.348862   14768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0716 17:30:50.392748   14768 ssh_runner.go:195] Run: grep 172.27.170.236	control-plane.minikube.internal$ /etc/hosts
	I0716 17:30:50.399935   14768 command_runner.go:130] > 172.27.170.236	control-plane.minikube.internal
	I0716 17:30:50.412330   14768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:30:50.617476   14768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:30:50.641480   14768 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300 for IP: 172.27.170.236
	I0716 17:30:50.641480   14768 certs.go:194] generating shared ca certs ...
	I0716 17:30:50.641480   14768 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:30:50.641480   14768 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:30:50.642542   14768 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:30:50.642542   14768 certs.go:256] generating profile certs ...
	I0716 17:30:50.643481   14768 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.key
	I0716 17:30:50.643481   14768 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\apiserver.key.89ee30f3
	I0716 17:30:50.643481   14768 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\proxy-client.key
	I0716 17:30:50.643481   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:30:50.643481   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:30:50.644525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:30:50.644525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:30:50.644525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:30:50.644525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:30:50.644525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:30:50.644525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:30:50.645524   14768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:30:50.645524   14768 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:30:50.645524   14768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:30:50.646524   14768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:30:50.646524   14768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:30:50.646524   14768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:30:50.647525   14768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:30:50.647525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:30:50.647525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:30:50.647525   14768 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:30:50.648525   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:30:50.693690   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:30:50.737088   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:30:50.781556   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:30:50.838549   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:30:50.881900   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:30:50.978716   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:30:51.095709   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0716 17:30:51.158017   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:30:51.218853   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:30:51.271750   14768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:30:51.326795   14768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:30:51.379803   14768 ssh_runner.go:195] Run: openssl version
	I0716 17:30:51.389233   14768 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 17:30:51.401836   14768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:30:51.441395   14768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:30:51.451135   14768 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:30:51.451670   14768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:30:51.464689   14768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:30:51.473956   14768 command_runner.go:130] > 51391683
	I0716 17:30:51.486951   14768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:30:51.518031   14768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:30:51.551462   14768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:30:51.558454   14768 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:30:51.558454   14768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:30:51.570466   14768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:30:51.578463   14768 command_runner.go:130] > 3ec20f2e
	I0716 17:30:51.590452   14768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:30:51.626492   14768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:30:51.661793   14768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:30:51.671250   14768 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:30:51.671328   14768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:30:51.683739   14768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:30:51.691737   14768 command_runner.go:130] > b5213941
	I0716 17:30:51.702762   14768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
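The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above implement OpenSSL's hashed CA directory lookup: OpenSSL finds a trusted CA by computing the subject-name hash and opening `<hash>.0` in the certs directory, so the symlink name must match the hash exactly. A sketch in a throwaway directory, with an illustrative CN rather than minikube's real CA:

```shell
# Generate a throwaway self-signed CA (CN=exampleCA is illustrative),
# then create the <subject-hash>.0 symlink OpenSSL uses for CA lookup.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.pem" -days 1 -subj "/CN=exampleCA" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # 8 hex chars
ln -fs "ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

This is why the log hashes each .pem before linking: the symlink's filename carries the lookup key, not its target.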
	I0716 17:30:51.731815   14768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:30:51.740685   14768 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:30:51.740767   14768 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0716 17:30:51.740767   14768 command_runner.go:130] > Device: 8,1	Inode: 7337298     Links: 1
	I0716 17:30:51.740767   14768 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 17:30:51.740767   14768 command_runner.go:130] > Access: 2024-07-17 00:28:07.678202802 +0000
	I0716 17:30:51.740767   14768 command_runner.go:130] > Modify: 2024-07-17 00:28:07.678202802 +0000
	I0716 17:30:51.740860   14768 command_runner.go:130] > Change: 2024-07-17 00:28:07.678202802 +0000
	I0716 17:30:51.740860   14768 command_runner.go:130] >  Birth: 2024-07-17 00:28:07.678202802 +0000
	I0716 17:30:51.754074   14768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0716 17:30:51.766685   14768 command_runner.go:130] > Certificate will not expire
	I0716 17:30:51.778698   14768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0716 17:30:51.788697   14768 command_runner.go:130] > Certificate will not expire
	I0716 17:30:51.801908   14768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0716 17:30:51.812901   14768 command_runner.go:130] > Certificate will not expire
	I0716 17:30:51.825914   14768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0716 17:30:51.835892   14768 command_runner.go:130] > Certificate will not expire
	I0716 17:30:51.848993   14768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0716 17:30:51.861126   14768 command_runner.go:130] > Certificate will not expire
	I0716 17:30:51.874182   14768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0716 17:30:51.886949   14768 command_runner.go:130] > Certificate will not expire
	I0716 17:30:51.887415   14768 kubeadm.go:392] StartCluster: {Name:functional-804300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-804300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.236 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:30:51.896276   14768 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:30:51.988753   14768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:30:52.011779   14768 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0716 17:30:52.011779   14768 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0716 17:30:52.011867   14768 command_runner.go:130] > /var/lib/minikube/etcd:
	I0716 17:30:52.011867   14768 command_runner.go:130] > member
	I0716 17:30:52.017029   14768 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0716 17:30:52.017113   14768 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0716 17:30:52.028781   14768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0716 17:30:52.055556   14768 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0716 17:30:52.055925   14768 kubeconfig.go:125] found "functional-804300" server: "https://172.27.170.236:8441"
	I0716 17:30:52.057856   14768 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:30:52.058851   14768 kapi.go:59] client config for functional-804300: &rest.Config{Host:"https://172.27.170.236:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-804300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-804300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:30:52.059854   14768 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:30:52.072895   14768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0716 17:30:52.096873   14768 kubeadm.go:630] The running cluster does not require reconfiguration: 172.27.170.236
	I0716 17:30:52.096873   14768 kubeadm.go:1160] stopping kube-system containers ...
	I0716 17:30:52.104872   14768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:30:52.163956   14768 command_runner.go:130] > cff7061e1bed
	I0716 17:30:52.163956   14768 command_runner.go:130] > e7e74e4598e2
	I0716 17:30:52.163956   14768 command_runner.go:130] > ce84b5baad40
	I0716 17:30:52.163956   14768 command_runner.go:130] > cdb08843a6a3
	I0716 17:30:52.163956   14768 command_runner.go:130] > 72ad6f86d4f3
	I0716 17:30:52.163956   14768 command_runner.go:130] > 390ac1cbe2d6
	I0716 17:30:52.163956   14768 command_runner.go:130] > caf1d2413695
	I0716 17:30:52.163956   14768 command_runner.go:130] > 9331f27aaf34
	I0716 17:30:52.163956   14768 command_runner.go:130] > 25958cbb8fb6
	I0716 17:30:52.163956   14768 command_runner.go:130] > 6ff413830127
	I0716 17:30:52.163956   14768 command_runner.go:130] > 0a4982ea44de
	I0716 17:30:52.163956   14768 command_runner.go:130] > dd6946e9d4e1
	I0716 17:30:52.164208   14768 command_runner.go:130] > a4c4a1522a52
	I0716 17:30:52.164208   14768 command_runner.go:130] > b83305d7883c
	I0716 17:30:52.164229   14768 command_runner.go:130] > 2ecc0e364eda
	I0716 17:30:52.164301   14768 command_runner.go:130] > ab95236e1ebb
	I0716 17:30:52.164301   14768 command_runner.go:130] > 409fc97b3ffe
	I0716 17:30:52.164301   14768 command_runner.go:130] > ef0d14592aa1
	I0716 17:30:52.164393   14768 command_runner.go:130] > f6d0ec45c354
	I0716 17:30:52.164393   14768 command_runner.go:130] > bc738a043e52
	I0716 17:30:52.164393   14768 command_runner.go:130] > 252a60b53f9e
	I0716 17:30:52.164393   14768 command_runner.go:130] > db9f21d7a7ce
	I0716 17:30:52.164393   14768 command_runner.go:130] > 681aa81d3f1e
	I0716 17:30:52.164514   14768 docker.go:483] Stopping containers: [cff7061e1bed e7e74e4598e2 ce84b5baad40 cdb08843a6a3 72ad6f86d4f3 390ac1cbe2d6 caf1d2413695 9331f27aaf34 25958cbb8fb6 6ff413830127 0a4982ea44de dd6946e9d4e1 a4c4a1522a52 b83305d7883c 2ecc0e364eda ab95236e1ebb 409fc97b3ffe ef0d14592aa1 f6d0ec45c354 bc738a043e52 252a60b53f9e db9f21d7a7ce 681aa81d3f1e]
	I0716 17:30:52.173490   14768 ssh_runner.go:195] Run: docker stop cff7061e1bed e7e74e4598e2 ce84b5baad40 cdb08843a6a3 72ad6f86d4f3 390ac1cbe2d6 caf1d2413695 9331f27aaf34 25958cbb8fb6 6ff413830127 0a4982ea44de dd6946e9d4e1 a4c4a1522a52 b83305d7883c 2ecc0e364eda ab95236e1ebb 409fc97b3ffe ef0d14592aa1 f6d0ec45c354 bc738a043e52 252a60b53f9e db9f21d7a7ce 681aa81d3f1e
	I0716 17:30:52.994185   14768 command_runner.go:130] > cff7061e1bed
	I0716 17:30:52.994185   14768 command_runner.go:130] > e7e74e4598e2
	I0716 17:30:52.994185   14768 command_runner.go:130] > ce84b5baad40
	I0716 17:30:52.994185   14768 command_runner.go:130] > cdb08843a6a3
	I0716 17:30:52.994185   14768 command_runner.go:130] > 72ad6f86d4f3
	I0716 17:30:52.994185   14768 command_runner.go:130] > 390ac1cbe2d6
	I0716 17:30:52.994185   14768 command_runner.go:130] > caf1d2413695
	I0716 17:30:52.994185   14768 command_runner.go:130] > 9331f27aaf34
	I0716 17:30:52.994185   14768 command_runner.go:130] > 25958cbb8fb6
	I0716 17:30:52.994185   14768 command_runner.go:130] > 6ff413830127
	I0716 17:30:52.994185   14768 command_runner.go:130] > 0a4982ea44de
	I0716 17:30:52.994185   14768 command_runner.go:130] > dd6946e9d4e1
	I0716 17:30:52.994185   14768 command_runner.go:130] > a4c4a1522a52
	I0716 17:30:52.994185   14768 command_runner.go:130] > b83305d7883c
	I0716 17:30:52.994185   14768 command_runner.go:130] > 2ecc0e364eda
	I0716 17:30:52.994185   14768 command_runner.go:130] > ab95236e1ebb
	I0716 17:30:52.994185   14768 command_runner.go:130] > 409fc97b3ffe
	I0716 17:30:52.994185   14768 command_runner.go:130] > ef0d14592aa1
	I0716 17:30:52.994185   14768 command_runner.go:130] > f6d0ec45c354
	I0716 17:30:52.994185   14768 command_runner.go:130] > bc738a043e52
	I0716 17:30:52.994185   14768 command_runner.go:130] > 252a60b53f9e
	I0716 17:30:52.994185   14768 command_runner.go:130] > db9f21d7a7ce
	I0716 17:30:52.994185   14768 command_runner.go:130] > 681aa81d3f1e
	I0716 17:30:53.005790   14768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0716 17:30:53.080126   14768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:30:53.099356   14768 command_runner.go:130] > -rw------- 1 root root 5651 Jul 17 00:28 /etc/kubernetes/admin.conf
	I0716 17:30:53.099356   14768 command_runner.go:130] > -rw------- 1 root root 5654 Jul 17 00:28 /etc/kubernetes/controller-manager.conf
	I0716 17:30:53.099356   14768 command_runner.go:130] > -rw------- 1 root root 2007 Jul 17 00:28 /etc/kubernetes/kubelet.conf
	I0716 17:30:53.099356   14768 command_runner.go:130] > -rw------- 1 root root 5606 Jul 17 00:28 /etc/kubernetes/scheduler.conf
	I0716 17:30:53.099356   14768 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 17 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul 17 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 17 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 17 00:28 /etc/kubernetes/scheduler.conf
	
	I0716 17:30:53.114179   14768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0716 17:30:53.131747   14768 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0716 17:30:53.143748   14768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0716 17:30:53.160557   14768 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0716 17:30:53.172588   14768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0716 17:30:53.187972   14768 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0716 17:30:53.199239   14768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:30:53.228098   14768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0716 17:30:53.244069   14768 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0716 17:30:53.256136   14768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:30:53.284130   14768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:30:53.303154   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0716 17:30:53.387898   14768 command_runner.go:130] > [certs] Using the existing "sa" key
	I0716 17:30:53.387898   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0716 17:30:54.510534   14768 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:30:54.510534   14768 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0716 17:30:54.510534   14768 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0716 17:30:54.510534   14768 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0716 17:30:54.510534   14768 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:30:54.510534   14768 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:30:54.510534   14768 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1226313s)
	I0716 17:30:54.510534   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0716 17:30:54.791367   14768 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:30:54.791367   14768 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:30:54.791367   14768 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 17:30:54.791367   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0716 17:30:54.866661   14768 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:30:54.866661   14768 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:30:54.866661   14768 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:30:54.866789   14768 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:30:54.866789   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0716 17:30:54.994903   14768 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:30:54.995033   14768 api_server.go:52] waiting for apiserver process to appear ...
	I0716 17:30:55.006692   14768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 17:30:55.512334   14768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 17:30:56.018026   14768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 17:30:56.515221   14768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 17:30:56.542207   14768 command_runner.go:130] > 5881
	I0716 17:30:56.542953   14768 api_server.go:72] duration metric: took 1.5480439s to wait for apiserver process to appear ...
	I0716 17:30:56.543049   14768 api_server.go:88] waiting for apiserver healthz status ...
	I0716 17:30:56.543049   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:30:59.990366   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0716 17:30:59.990905   14768 api_server.go:103] status: https://172.27.170.236:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0716 17:30:59.991115   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:31:00.044398   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0716 17:31:00.044514   14768 api_server.go:103] status: https://172.27.170.236:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0716 17:31:00.044566   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:31:00.063950   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0716 17:31:00.063950   14768 api_server.go:103] status: https://172.27.170.236:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0716 17:31:00.546507   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:31:00.555974   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0716 17:31:00.555974   14768 api_server.go:103] status: https://172.27.170.236:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0716 17:31:01.049863   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:31:01.059288   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0716 17:31:01.059288   14768 api_server.go:103] status: https://172.27.170.236:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0716 17:31:01.544139   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:31:01.553847   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 200:
	ok
	I0716 17:31:01.553847   14768 round_trippers.go:463] GET https://172.27.170.236:8441/version
	I0716 17:31:01.553847   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:01.553847   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:01.553847   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:01.573568   14768 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0716 17:31:01.573646   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:01.573646   14768 round_trippers.go:580]     Audit-Id: f0afb1b5-43ec-44d0-9c38-e167f7c8b7c1
	I0716 17:31:01.573646   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:01.573646   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:01.573646   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:01.573646   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:01.573748   14768 round_trippers.go:580]     Content-Length: 263
	I0716 17:31:01.573784   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:01 GMT
	I0716 17:31:01.573784   14768 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 17:31:01.573988   14768 api_server.go:141] control plane version: v1.30.2
	I0716 17:31:01.574022   14768 api_server.go:131] duration metric: took 5.030952s to wait for apiserver health ...
	I0716 17:31:01.574093   14768 cni.go:84] Creating CNI manager for ""
	I0716 17:31:01.574093   14768 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:31:01.577644   14768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0716 17:31:01.592886   14768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0716 17:31:01.615594   14768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0716 17:31:01.645854   14768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 17:31:01.645854   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:01.645854   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:01.645854   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:01.645854   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:01.658877   14768 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0716 17:31:01.658877   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:01.658877   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:01.658877   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:01.658877   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:01 GMT
	I0716 17:31:01.658877   14768 round_trippers.go:580]     Audit-Id: 4cc4d101-22cb-4a23-8bb4-6e60bc3dc09c
	I0716 17:31:01.658877   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:01.658877   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:01.659850   14768 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"590","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52041 chars]
	I0716 17:31:01.665908   14768 system_pods.go:59] 7 kube-system pods found
	I0716 17:31:01.665908   14768 system_pods.go:61] "coredns-7db6d8ff4d-z9r2k" [ba79f306-2c4d-4ee6-8622-1d2967c40c34] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0716 17:31:01.665908   14768 system_pods.go:61] "etcd-functional-804300" [972afb37-99e9-4387-b6a4-2c6d708a3bfd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0716 17:31:01.665908   14768 system_pods.go:61] "kube-apiserver-functional-804300" [3c09f919-6bd7-4bfe-928c-c394ae02b434] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0716 17:31:01.665908   14768 system_pods.go:61] "kube-controller-manager-functional-804300" [5471a5a2-6d9a-4eff-98f0-3f94d40f7749] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0716 17:31:01.665908   14768 system_pods.go:61] "kube-proxy-4r9g4" [693e7731-f132-4980-84c0-f0df321e1012] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0716 17:31:01.665908   14768 system_pods.go:61] "kube-scheduler-functional-804300" [5923bbf2-211a-4508-b912-bab732c092b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0716 17:31:01.665908   14768 system_pods.go:61] "storage-provisioner" [c846d719-af54-492f-8e1a-b4bb2a912d7f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0716 17:31:01.665908   14768 system_pods.go:74] duration metric: took 20.0542ms to wait for pod list to return data ...
	I0716 17:31:01.665908   14768 node_conditions.go:102] verifying NodePressure condition ...
	I0716 17:31:01.665908   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes
	I0716 17:31:01.665908   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:01.665908   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:01.665908   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:01.669852   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:01.669852   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:01.669852   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:01.669852   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:01.669852   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:01.669852   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:01 GMT
	I0716 17:31:01.670360   14768 round_trippers.go:580]     Audit-Id: 008a606a-cced-4cbe-8d4a-80cfd5fe8ef1
	I0716 17:31:01.670360   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:01.672217   14768 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0716 17:31:01.673107   14768 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 17:31:01.673107   14768 node_conditions.go:123] node cpu capacity is 2
	I0716 17:31:01.673107   14768 node_conditions.go:105] duration metric: took 7.1984ms to run NodePressure ...
	I0716 17:31:01.673107   14768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0716 17:31:02.252032   14768 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 17:31:02.252032   14768 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 17:31:02.252032   14768 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0716 17:31:02.252032   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0716 17:31:02.253036   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:02.253036   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:02.253036   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:02.267030   14768 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0716 17:31:02.267030   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:02.267030   14768 round_trippers.go:580]     Audit-Id: c0d3b90b-bd65-454f-8bcb-0f144dc09b99
	I0716 17:31:02.268025   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:02.268025   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:02.268025   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:02.268025   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:02.268025   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:02 GMT
	I0716 17:31:02.268900   14768 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"604"},"items":[{"metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31181 chars]
	I0716 17:31:02.275097   14768 kubeadm.go:739] kubelet initialised
	I0716 17:31:02.275097   14768 kubeadm.go:740] duration metric: took 23.0656ms waiting for restarted kubelet to initialise ...
	I0716 17:31:02.275097   14768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 17:31:02.275097   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:02.275097   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:02.275097   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:02.275097   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:02.297075   14768 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0716 17:31:02.297075   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:02.297075   14768 round_trippers.go:580]     Audit-Id: 29cf2f30-36ad-49db-ba63-a030b62be142
	I0716 17:31:02.297075   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:02.297075   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:02.297075   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:02.297075   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:02.297075   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:02 GMT
	I0716 17:31:02.299082   14768 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"604"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"590","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52041 chars]
	I0716 17:31:02.301083   14768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:02.301083   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:02.301083   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:02.301083   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:02.301083   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:02.307083   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:02.307083   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:02.307083   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:02 GMT
	I0716 17:31:02.307083   14768 round_trippers.go:580]     Audit-Id: bf42a6ef-68a6-492a-b8d6-e43f9ca2ada1
	I0716 17:31:02.307083   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:02.307083   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:02.307083   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:02.307083   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:02.327116   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"590","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0716 17:31:02.328110   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:02.328181   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:02.328181   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:02.328181   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:02.333079   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:02.334123   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:02.334204   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:02.334204   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:02.334238   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:02.334238   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:02 GMT
	I0716 17:31:02.334238   14768 round_trippers.go:580]     Audit-Id: f1413072-ef44-4a61-bc11-048784cbd3a2
	I0716 17:31:02.334238   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:02.334530   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:02.814272   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:02.814272   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:02.814272   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:02.814272   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:02.819309   14768 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 17:31:02.819387   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:02.819387   14768 round_trippers.go:580]     Audit-Id: 60eda9cd-9284-4bdb-9e26-ae0b6f28ae65
	I0716 17:31:02.819387   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:02.819387   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:02.819453   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:02.819453   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:02.819453   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:02 GMT
	I0716 17:31:02.822718   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"590","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0716 17:31:02.823385   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:02.823385   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:02.823385   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:02.823385   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:02.825948   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:02.825948   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:02.825948   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:02.825948   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:02.825948   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:02.825948   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:02 GMT
	I0716 17:31:02.825948   14768 round_trippers.go:580]     Audit-Id: e7df8a26-bcee-437a-a366-5357acf70104
	I0716 17:31:02.825948   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:02.826853   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:03.302716   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:03.302896   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:03.302896   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:03.302896   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:03.308095   14768 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 17:31:03.308095   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:03.308095   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:03.308095   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:03.308095   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:03.308095   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:03 GMT
	I0716 17:31:03.308095   14768 round_trippers.go:580]     Audit-Id: 1bd194c3-3865-43fc-a660-33b4a02006e1
	I0716 17:31:03.308095   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:03.308095   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:03.309185   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:03.309185   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:03.309279   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:03.309279   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:03.311000   14768 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 17:31:03.312032   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:03.312032   14768 round_trippers.go:580]     Audit-Id: e337d847-a75c-48d8-98be-65403f1153c3
	I0716 17:31:03.312032   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:03.312032   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:03.312032   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:03.312032   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:03.312116   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:03 GMT
	I0716 17:31:03.312349   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:03.816849   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:03.816849   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:03.816934   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:03.816934   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:03.820409   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:03.820409   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:03.820409   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:03.820409   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:03.820485   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:03.820485   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:03 GMT
	I0716 17:31:03.820485   14768 round_trippers.go:580]     Audit-Id: 3ffdeb92-231c-4b7f-ac22-fd8a866d1250
	I0716 17:31:03.820485   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:03.821292   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:03.822441   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:03.822441   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:03.822441   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:03.822441   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:03.825840   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:03.825928   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:03.825928   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:03.826728   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:03.827259   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:03.827259   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:03.827259   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:03 GMT
	I0716 17:31:03.827259   14768 round_trippers.go:580]     Audit-Id: e150aafe-e210-455d-8cb3-d3412c9c1401
	I0716 17:31:03.827750   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:04.302702   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:04.302702   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:04.302702   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:04.302702   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:04.306278   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:04.306278   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:04.307241   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:04.307241   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:04.307241   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:04.307277   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:04.307277   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:04 GMT
	I0716 17:31:04.307277   14768 round_trippers.go:580]     Audit-Id: c431f1c7-bf27-4a70-96b5-55a40923d418
	I0716 17:31:04.307529   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:04.308308   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:04.308308   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:04.308308   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:04.308308   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:04.313288   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:04.313288   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:04.313288   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:04 GMT
	I0716 17:31:04.313288   14768 round_trippers.go:580]     Audit-Id: 113f50b5-c687-4637-8332-e5dee160dfb3
	I0716 17:31:04.313288   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:04.313866   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:04.313866   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:04.313866   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:04.313960   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:04.314462   14768 pod_ready.go:102] pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace has status "Ready":"False"
	I0716 17:31:04.804710   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:04.804710   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:04.804710   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:04.804710   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:04.809366   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:04.809668   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:04.809723   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:04.809723   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:04 GMT
	I0716 17:31:04.809723   14768 round_trippers.go:580]     Audit-Id: 84c4a763-8c9f-496b-b0fa-590c7d2be30c
	I0716 17:31:04.809723   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:04.809723   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:04.809723   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:04.809723   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:04.810392   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:04.810960   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:04.810960   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:04.810960   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:04.813321   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:04.814276   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:04.814276   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:04 GMT
	I0716 17:31:04.814276   14768 round_trippers.go:580]     Audit-Id: c9a7307d-22f5-45e3-90c9-855170f10572
	I0716 17:31:04.814276   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:04.814276   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:04.814276   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:04.814276   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:04.814634   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:05.305606   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:05.305606   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:05.305606   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:05.305606   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:05.310183   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:05.310183   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:05.310183   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:05.310484   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:05.310484   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:05 GMT
	I0716 17:31:05.310484   14768 round_trippers.go:580]     Audit-Id: 1dc1e9c6-72d0-40d7-bdc8-739a2d166c27
	I0716 17:31:05.310484   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:05.310484   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:05.310847   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:05.311077   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:05.311077   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:05.311077   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:05.311628   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:05.318341   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:05.318520   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:05.318520   14768 round_trippers.go:580]     Audit-Id: 01175442-17bc-4311-9d44-29423cab2798
	I0716 17:31:05.318562   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:05.318562   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:05.318562   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:05.318562   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:05.318562   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:05 GMT
	I0716 17:31:05.318562   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:05.805369   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:05.805369   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:05.805369   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:05.805369   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:05.808852   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:05.809792   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:05.809792   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:05.809792   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:05.809831   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:05.809831   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:05 GMT
	I0716 17:31:05.809831   14768 round_trippers.go:580]     Audit-Id: c207c35e-d171-4fd2-ac95-04f18e15c741
	I0716 17:31:05.809831   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:05.810055   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:05.810777   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:05.810777   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:05.810777   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:05.810777   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:05.813098   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:05.814011   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:05.814011   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:05 GMT
	I0716 17:31:05.814011   14768 round_trippers.go:580]     Audit-Id: 78de71f9-f100-4167-8707-98e9b287df33
	I0716 17:31:05.814011   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:05.814011   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:05.814011   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:05.814011   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:05.814129   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:06.305102   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:06.305208   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:06.305208   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:06.305208   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:06.309752   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:06.310605   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:06.310605   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:06.310605   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:06.310605   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:06.310605   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:06 GMT
	I0716 17:31:06.310605   14768 round_trippers.go:580]     Audit-Id: 9db3f361-5833-47b6-af89-3ac74a5d772a
	I0716 17:31:06.310605   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:06.310935   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"611","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0716 17:31:06.311788   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:06.311846   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:06.311846   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:06.311846   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:06.313998   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:06.313998   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:06.314575   14768 round_trippers.go:580]     Audit-Id: 9d97ff8e-746a-4308-a77b-6c7239534db5
	I0716 17:31:06.314575   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:06.314575   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:06.314575   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:06.314575   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:06.314575   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:06 GMT
	I0716 17:31:06.315015   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:06.315574   14768 pod_ready.go:102] pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace has status "Ready":"False"
	I0716 17:31:06.806734   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:06.806734   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:06.806734   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:06.806734   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:06.810346   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:06.811194   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:06.811194   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:06.811194   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:06.811194   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:06.811194   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:06 GMT
	I0716 17:31:06.811194   14768 round_trippers.go:580]     Audit-Id: 02781897-f845-48da-9269-bcf859a6e14c
	I0716 17:31:06.811194   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:06.811194   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"614","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0716 17:31:06.812013   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:06.812625   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:06.812625   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:06.812625   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:06.816318   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:06.816318   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:06.816318   14768 round_trippers.go:580]     Audit-Id: e9c58d5e-c9d8-491e-b0c1-2dd7f092299e
	I0716 17:31:06.816318   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:06.816318   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:06.816318   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:06.816318   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:06.816318   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:06 GMT
	I0716 17:31:06.816318   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:06.817096   14768 pod_ready.go:92] pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:06.817096   14768 pod_ready.go:81] duration metric: took 4.5159947s for pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:06.817096   14768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:06.817224   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:06.817315   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:06.817315   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:06.817315   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:06.819901   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:06.819901   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:06.819901   14768 round_trippers.go:580]     Audit-Id: ef6b567a-877a-4ff1-86ef-12089459422c
	I0716 17:31:06.819901   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:06.819901   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:06.819901   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:06.819901   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:06.820792   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:06 GMT
	I0716 17:31:06.821021   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:06.821519   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:06.821615   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:06.821715   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:06.821738   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:06.824345   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:06.824345   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:06.824345   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:06.824345   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:06.824684   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:06 GMT
	I0716 17:31:06.824684   14768 round_trippers.go:580]     Audit-Id: 8e024d3e-e0c9-4e64-8d76-db1869de331b
	I0716 17:31:06.824684   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:06.824684   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:06.824851   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:07.320926   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:07.320926   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:07.320926   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:07.320926   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:07.325514   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:07.326157   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:07.326157   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:07.326157   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:07.326157   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:07 GMT
	I0716 17:31:07.326157   14768 round_trippers.go:580]     Audit-Id: 252c2b25-b7d5-4be4-b13e-8054ad5745eb
	I0716 17:31:07.326157   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:07.326220   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:07.326220   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:07.327210   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:07.327210   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:07.327262   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:07.327262   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:07.330120   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:07.330120   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:07.330120   14768 round_trippers.go:580]     Audit-Id: 99d56e75-71a7-4a0c-b2f1-97715a8da884
	I0716 17:31:07.330120   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:07.330120   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:07.330120   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:07.330120   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:07.330120   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:07 GMT
	I0716 17:31:07.330120   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:07.819036   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:07.819212   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:07.819212   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:07.819212   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:07.823874   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:07.823874   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:07.823874   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:07.823874   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:07.823874   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:07 GMT
	I0716 17:31:07.823874   14768 round_trippers.go:580]     Audit-Id: 995ac73b-5d80-407f-9790-fe2974444d78
	I0716 17:31:07.823874   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:07.823874   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:07.825004   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:07.825582   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:07.825582   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:07.825582   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:07.825582   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:07.828450   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:07.828450   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:07.828450   14768 round_trippers.go:580]     Audit-Id: 565f5e38-08f1-41ce-b09e-0405f7015c04
	I0716 17:31:07.828450   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:07.828450   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:07.828450   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:07.828682   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:07.828682   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:07 GMT
	I0716 17:31:07.828903   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:08.318148   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:08.318148   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:08.318148   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:08.318148   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:08.323617   14768 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 17:31:08.323617   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:08.323617   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:08 GMT
	I0716 17:31:08.323617   14768 round_trippers.go:580]     Audit-Id: ca508790-dcd1-408b-9630-9c685414558e
	I0716 17:31:08.323617   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:08.323617   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:08.323617   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:08.323617   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:08.323617   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:08.324813   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:08.324813   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:08.324813   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:08.324813   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:08.327192   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:08.327192   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:08.327192   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:08.327192   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:08.327192   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:08.328187   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:08.328187   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:08 GMT
	I0716 17:31:08.328211   14768 round_trippers.go:580]     Audit-Id: 4791e95d-3d27-4f14-97af-55a5c2a78708
	I0716 17:31:08.328388   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:08.817772   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:08.817772   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:08.818040   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:08.818040   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:08.827210   14768 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0716 17:31:08.827210   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:08.827210   14768 round_trippers.go:580]     Audit-Id: 7823c811-55a7-4932-97aa-5b369da3a7f9
	I0716 17:31:08.827210   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:08.827210   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:08.827210   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:08.827210   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:08.827210   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:08 GMT
	I0716 17:31:08.827210   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:08.827210   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:08.827210   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:08.827210   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:08.827210   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:08.831164   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:08.831164   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:08.831434   14768 round_trippers.go:580]     Audit-Id: 76d50a6f-4856-463c-b39c-540e38aa0991
	I0716 17:31:08.831434   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:08.831434   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:08.831434   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:08.831434   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:08.831434   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:08 GMT
	I0716 17:31:08.831575   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:08.832095   14768 pod_ready.go:102] pod "etcd-functional-804300" in "kube-system" namespace has status "Ready":"False"
	I0716 17:31:09.320353   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:09.320353   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:09.320353   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:09.320353   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:09.325064   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:09.325616   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:09.325616   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:09 GMT
	I0716 17:31:09.325616   14768 round_trippers.go:580]     Audit-Id: 17761a37-2ab0-49bf-a820-62ba67652266
	I0716 17:31:09.325616   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:09.325690   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:09.325690   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:09.325731   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:09.325926   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:09.326216   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:09.326816   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:09.326816   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:09.326816   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:09.329027   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:09.329027   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:09.329027   14768 round_trippers.go:580]     Audit-Id: 02bcf6de-0a4c-4622-b6fe-293eb586828c
	I0716 17:31:09.329027   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:09.329027   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:09.329027   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:09.329027   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:09.329027   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:09 GMT
	I0716 17:31:09.330025   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:09.820790   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:09.820790   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:09.820790   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:09.820790   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:09.827090   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:09.827090   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:09.827090   14768 round_trippers.go:580]     Audit-Id: 7bfca2d1-be80-438d-90ab-b2d741c27b7f
	I0716 17:31:09.827090   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:09.827090   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:09.827090   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:09.827090   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:09.827090   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:09 GMT
	I0716 17:31:09.827090   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:09.827901   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:09.827901   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:09.827901   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:09.827901   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:09.831347   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:09.831347   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:09.831347   14768 round_trippers.go:580]     Audit-Id: 1ddf4017-1031-496e-8d80-522c540f584b
	I0716 17:31:09.831347   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:09.831347   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:09.831434   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:09.831434   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:09.831434   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:09 GMT
	I0716 17:31:09.832193   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:10.320710   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:10.320861   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:10.320861   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:10.320861   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:10.325468   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:10.325468   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:10.325468   14768 round_trippers.go:580]     Audit-Id: eae71c71-5af3-4686-8a58-f3288da2a6c9
	I0716 17:31:10.325568   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:10.325568   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:10.325568   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:10.325568   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:10.325568   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:10 GMT
	I0716 17:31:10.325761   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:10.326529   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:10.326617   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:10.326617   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:10.326617   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:10.332935   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:10.333161   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:10.333161   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:10.333161   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:10.333161   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:10.333245   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:10 GMT
	I0716 17:31:10.333245   14768 round_trippers.go:580]     Audit-Id: 8643668e-8cf3-4614-ab51-cce633f1c48f
	I0716 17:31:10.333267   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:10.334964   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:10.823569   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:10.823775   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:10.823775   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:10.823775   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:10.830496   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:10.830496   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:10.830496   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:10.830496   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:10 GMT
	I0716 17:31:10.830496   14768 round_trippers.go:580]     Audit-Id: 7e8a6a01-0930-4682-a92a-a76ddf2f71cb
	I0716 17:31:10.830496   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:10.830496   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:10.830496   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:10.830496   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:10.832322   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:10.832322   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:10.832322   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:10.832322   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:10.835974   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:10.835974   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:10.835974   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:10.835974   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:10 GMT
	I0716 17:31:10.835974   14768 round_trippers.go:580]     Audit-Id: dc16365a-fb10-4e7e-b3d2-ab1a32d920a0
	I0716 17:31:10.835974   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:10.835974   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:10.835974   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:10.835974   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:10.835974   14768 pod_ready.go:102] pod "etcd-functional-804300" in "kube-system" namespace has status "Ready":"False"
	I0716 17:31:11.324993   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:11.325094   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:11.325094   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:11.325094   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:11.328857   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:11.329242   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:11.329242   14768 round_trippers.go:580]     Audit-Id: 4e260c53-96f8-48f6-ad43-7b5d15219793
	I0716 17:31:11.329242   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:11.329242   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:11.329242   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:11.329242   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:11.329242   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:11 GMT
	I0716 17:31:11.329525   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:11.330351   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:11.330351   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:11.330455   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:11.330455   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:11.332867   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:11.333619   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:11.333619   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:11.333739   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:11.333739   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:11.333739   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:11 GMT
	I0716 17:31:11.333739   14768 round_trippers.go:580]     Audit-Id: c78505ec-eb4f-4637-bdad-1b55a1b045ec
	I0716 17:31:11.333739   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:11.334109   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:11.826905   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:11.826905   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:11.826905   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:11.826905   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:11.835448   14768 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 17:31:11.835448   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:11.835448   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:11.835448   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:11 GMT
	I0716 17:31:11.835448   14768 round_trippers.go:580]     Audit-Id: 02dc8639-df1c-44d3-8c1a-5bba673d4eee
	I0716 17:31:11.835448   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:11.835448   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:11.835448   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:11.835448   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:11.835448   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:11.835448   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:11.836782   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:11.836782   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:11.840289   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:11.840289   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:11.840289   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:11.840289   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:11.840289   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:11.840289   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:11.840289   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:11 GMT
	I0716 17:31:11.840289   14768 round_trippers.go:580]     Audit-Id: 0219c6cb-c95c-4dbc-a557-0d38dff3016c
	I0716 17:31:11.840289   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:12.326642   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:12.326679   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:12.326679   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:12.326742   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:12.329870   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:12.330908   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:12.330908   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:12.330963   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:12.330963   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:12.330963   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:12 GMT
	I0716 17:31:12.330963   14768 round_trippers.go:580]     Audit-Id: 1f259ba4-3d38-4161-a2ab-4fd739ff7c6d
	I0716 17:31:12.330963   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:12.331222   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:12.332264   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:12.332264   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:12.332328   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:12.332328   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:12.335537   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:12.335537   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:12.335537   14768 round_trippers.go:580]     Audit-Id: dc7293c5-c70a-4036-933a-2a0cefc6b315
	I0716 17:31:12.335537   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:12.335537   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:12.335537   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:12.335537   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:12.335537   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:12 GMT
	I0716 17:31:12.335537   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:12.830826   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:12.830826   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:12.830826   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:12.830826   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:12.833427   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:12.834220   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:12.834220   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:12.834220   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:12.834220   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:12.834220   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:12 GMT
	I0716 17:31:12.834220   14768 round_trippers.go:580]     Audit-Id: 5f9fa53a-7225-4954-9278-9873655504e8
	I0716 17:31:12.834220   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:12.834451   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:12.835583   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:12.835655   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:12.835655   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:12.835655   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:12.837430   14768 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 17:31:12.838327   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:12.838327   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:12.838327   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:12.838327   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:12.838327   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:12.838327   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:12 GMT
	I0716 17:31:12.838327   14768 round_trippers.go:580]     Audit-Id: 2d2d4494-7dc4-4c6b-9174-ca5130ca82c3
	I0716 17:31:12.838588   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:12.839064   14768 pod_ready.go:102] pod "etcd-functional-804300" in "kube-system" namespace has status "Ready":"False"
	I0716 17:31:13.323828   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:13.323828   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.323828   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.323828   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.328442   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:13.328984   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.328984   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.328984   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.328984   14768 round_trippers.go:580]     Audit-Id: 0b4f9f96-5e55-4fd5-abf6-378cf8bc9ad1
	I0716 17:31:13.328984   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.328984   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.328984   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.329267   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"586","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6411 chars]
	I0716 17:31:13.330095   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:13.330095   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.330095   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.330095   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.332446   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:13.332446   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.332446   14768 round_trippers.go:580]     Audit-Id: c007a090-9933-45f8-b4e5-a704d57cb41c
	I0716 17:31:13.332446   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.333282   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.333282   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.333282   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.333282   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.333833   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:13.824957   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:13.824957   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.824957   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.825063   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.827679   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:13.827679   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.827679   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.827679   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.827679   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.827679   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.828298   14768 round_trippers.go:580]     Audit-Id: 1d23aca6-ff92-41d3-a22d-4c43b06d2924
	I0716 17:31:13.828298   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.828727   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"628","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6187 chars]
	I0716 17:31:13.829366   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:13.829427   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.829427   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.829427   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.832020   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:13.832020   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.832020   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.832020   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.832020   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.832020   14768 round_trippers.go:580]     Audit-Id: da5f7525-9674-4eee-9ce3-fcd70ab33d11
	I0716 17:31:13.832020   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.832020   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.832690   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:13.833789   14768 pod_ready.go:92] pod "etcd-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:13.833789   14768 pod_ready.go:81] duration metric: took 7.0165358s for pod "etcd-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.833789   14768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.833789   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-804300
	I0716 17:31:13.833789   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.833789   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.833789   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.837742   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:13.838656   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.838656   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.838656   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.838656   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.838656   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.838656   14768 round_trippers.go:580]     Audit-Id: 5b55847c-21eb-4ae6-8a6c-875f02e42be3
	I0716 17:31:13.838656   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.838790   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-804300","namespace":"kube-system","uid":"3c09f919-6bd7-4bfe-928c-c394ae02b434","resourceVersion":"620","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.236:8441","kubernetes.io/config.hash":"f9b58d0560780a14b304507bf1dc73fe","kubernetes.io/config.mirror":"f9b58d0560780a14b304507bf1dc73fe","kubernetes.io/config.seen":"2024-07-17T00:28:12.431716628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8164 chars]
	I0716 17:31:13.839143   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:13.839601   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.839601   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.839601   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.841854   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:13.841854   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.842321   14768 round_trippers.go:580]     Audit-Id: 16fa9000-919e-4187-a9ec-0bb0f1d68a03
	I0716 17:31:13.842321   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.842321   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.842321   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.842321   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.842321   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.842650   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:13.843133   14768 pod_ready.go:92] pod "kube-apiserver-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:13.843218   14768 pod_ready.go:81] duration metric: took 9.4297ms for pod "kube-apiserver-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.843218   14768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.843376   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-804300
	I0716 17:31:13.843376   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.843376   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.843427   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.845094   14768 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 17:31:13.845094   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.845094   14768 round_trippers.go:580]     Audit-Id: ac6f4871-9eb4-4a45-acfb-7a2a6d64aac9
	I0716 17:31:13.845094   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.845094   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.845094   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.845094   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.845094   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.846183   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-804300","namespace":"kube-system","uid":"5471a5a2-6d9a-4eff-98f0-3f94d40f7749","resourceVersion":"616","creationTimestamp":"2024-07-17T00:28:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"122f213e2f85b06cfa52df268a4fedab","kubernetes.io/config.mirror":"122f213e2f85b06cfa52df268a4fedab","kubernetes.io/config.seen":"2024-07-17T00:28:20.245856519Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0716 17:31:13.846794   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:13.846794   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.846794   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.846794   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.849105   14768 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 17:31:13.849105   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.849105   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.849105   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.849214   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.849214   14768 round_trippers.go:580]     Audit-Id: 0ccd53d4-ca1b-4fe9-87f4-1f23f91e70be
	I0716 17:31:13.849214   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.849214   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.849214   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:13.849214   14768 pod_ready.go:92] pod "kube-controller-manager-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:13.849214   14768 pod_ready.go:81] duration metric: took 5.9953ms for pod "kube-controller-manager-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.849214   14768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4r9g4" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.849747   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-proxy-4r9g4
	I0716 17:31:13.849747   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.849747   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.849883   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.851929   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:13.851929   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.851929   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.852886   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.852886   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.852914   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.852914   14768 round_trippers.go:580]     Audit-Id: 1792cbde-6140-4dc8-87d2-bbff6c1aedcc
	I0716 17:31:13.852914   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.853371   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4r9g4","generateName":"kube-proxy-","namespace":"kube-system","uid":"693e7731-f132-4980-84c0-f0df321e1012","resourceVersion":"612","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3c28c2cc-b21c-4aa8-83a4-78acc60f8edf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c28c2cc-b21c-4aa8-83a4-78acc60f8edf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6180 chars]
	I0716 17:31:13.853443   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:13.853443   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.853443   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.853443   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.856201   14768 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 17:31:13.856201   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.856201   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.856201   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.856201   14768 round_trippers.go:580]     Audit-Id: acd8a792-d6e8-41fe-897d-76678610446c
	I0716 17:31:13.856201   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.856201   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.856201   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.856479   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:13.856603   14768 pod_ready.go:92] pod "kube-proxy-4r9g4" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:13.856603   14768 pod_ready.go:81] duration metric: took 6.8556ms for pod "kube-proxy-4r9g4" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.856603   14768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:13.856603   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804300
	I0716 17:31:13.856603   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.856603   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.856603   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.861628   14768 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 17:31:13.861759   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.861781   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.861781   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.861781   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.861781   14768 round_trippers.go:580]     Audit-Id: 7b032c9f-9fdc-495e-a2f6-ecf766c4a6e1
	I0716 17:31:13.861781   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.861781   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.861781   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804300","namespace":"kube-system","uid":"5923bbf2-211a-4508-b912-bab732c092b8","resourceVersion":"589","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.mirror":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.seen":"2024-07-17T00:28:12.431718628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5591 chars]
	I0716 17:31:13.862520   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:13.862520   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:13.862520   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:13.862520   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:13.865779   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:13.865779   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:13.865779   14768 round_trippers.go:580]     Audit-Id: e0da2c00-7ef0-47cc-b782-86071f849387
	I0716 17:31:13.865779   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:13.865779   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:13.865779   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:13.865779   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:13.865779   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:13 GMT
	I0716 17:31:13.866571   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:14.357265   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804300
	I0716 17:31:14.357265   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:14.357265   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:14.357265   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:14.360862   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:14.360862   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:14.360862   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:14.361178   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:14 GMT
	I0716 17:31:14.361178   14768 round_trippers.go:580]     Audit-Id: 6c5a019a-01dd-4dad-b066-f1cfca7c9a84
	I0716 17:31:14.361178   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:14.361178   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:14.361178   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:14.363154   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804300","namespace":"kube-system","uid":"5923bbf2-211a-4508-b912-bab732c092b8","resourceVersion":"589","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.mirror":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.seen":"2024-07-17T00:28:12.431718628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5591 chars]
	I0716 17:31:14.363691   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:14.363691   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:14.363691   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:14.363691   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:14.368057   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:14.368057   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:14.368057   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:14.368612   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:14 GMT
	I0716 17:31:14.368612   14768 round_trippers.go:580]     Audit-Id: f636ac69-8c82-4f3a-b0da-dba73af96e72
	I0716 17:31:14.368612   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:14.368612   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:14.368612   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:14.368895   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:14.856979   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804300
	I0716 17:31:14.856979   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:14.856979   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:14.856979   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:14.860661   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:14.860661   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:14.861553   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:14 GMT
	I0716 17:31:14.861553   14768 round_trippers.go:580]     Audit-Id: 55fa1a50-2537-4a17-927a-8530e63092a7
	I0716 17:31:14.861553   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:14.861553   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:14.861553   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:14.861553   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:14.861855   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804300","namespace":"kube-system","uid":"5923bbf2-211a-4508-b912-bab732c092b8","resourceVersion":"589","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.mirror":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.seen":"2024-07-17T00:28:12.431718628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5591 chars]
	I0716 17:31:14.862612   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:14.862687   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:14.862687   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:14.862687   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:14.865999   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:14.865999   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:14.865999   14768 round_trippers.go:580]     Audit-Id: 62204701-1115-4259-93ff-10389c894ce8
	I0716 17:31:14.865999   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:14.865999   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:14.865999   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:14.865999   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:14.865999   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:14 GMT
	I0716 17:31:14.866517   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:15.362262   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804300
	I0716 17:31:15.362372   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:15.362372   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:15.362372   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:15.367793   14768 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 17:31:15.367793   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:15.367793   14768 round_trippers.go:580]     Audit-Id: 32da9c9e-e05a-49df-ab1d-cc05f9917841
	I0716 17:31:15.367793   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:15.367793   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:15.367793   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:15.367793   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:15.368184   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:15 GMT
	I0716 17:31:15.368650   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804300","namespace":"kube-system","uid":"5923bbf2-211a-4508-b912-bab732c092b8","resourceVersion":"630","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.mirror":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.seen":"2024-07-17T00:28:12.431718628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5347 chars]
	I0716 17:31:15.369337   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:15.369412   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:15.369412   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:15.369412   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:15.371781   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:15.372855   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:15.372855   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:15 GMT
	I0716 17:31:15.372855   14768 round_trippers.go:580]     Audit-Id: ff29f2f0-e870-4e26-bfa3-d80d3b56076f
	I0716 17:31:15.372855   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:15.372855   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:15.372855   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:15.372855   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:15.373040   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:15.373595   14768 pod_ready.go:92] pod "kube-scheduler-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:15.373595   14768 pod_ready.go:81] duration metric: took 1.5169859s for pod "kube-scheduler-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:15.373595   14768 pod_ready.go:38] duration metric: took 13.0984437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 17:31:15.373888   14768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:31:15.392785   14768 command_runner.go:130] > -16
	I0716 17:31:15.393462   14768 ops.go:34] apiserver oom_adj: -16
	I0716 17:31:15.393462   14768 kubeadm.go:597] duration metric: took 23.3762534s to restartPrimaryControlPlane
	I0716 17:31:15.393462   14768 kubeadm.go:394] duration metric: took 23.5059508s to StartCluster
	I0716 17:31:15.393551   14768 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:31:15.393817   14768 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:31:15.395204   14768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:31:15.396899   14768 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.236 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:31:15.396899   14768 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:31:15.396899   14768 addons.go:69] Setting storage-provisioner=true in profile "functional-804300"
	I0716 17:31:15.397240   14768 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:31:15.397240   14768 addons.go:234] Setting addon storage-provisioner=true in "functional-804300"
	W0716 17:31:15.397240   14768 addons.go:243] addon storage-provisioner should already be in state true
	I0716 17:31:15.396899   14768 addons.go:69] Setting default-storageclass=true in profile "functional-804300"
	I0716 17:31:15.397240   14768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-804300"
	I0716 17:31:15.397482   14768 host.go:66] Checking if "functional-804300" exists ...
	I0716 17:31:15.398340   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:31:15.398620   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:31:15.400857   14768 out.go:177] * Verifying Kubernetes components...
	I0716 17:31:15.420216   14768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:31:15.713366   14768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:31:15.745441   14768 node_ready.go:35] waiting up to 6m0s for node "functional-804300" to be "Ready" ...
	I0716 17:31:15.745745   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:15.745832   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:15.745832   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:15.745832   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:15.749526   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:15.749526   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:15.749603   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:15.749603   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:15.749700   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:15.749700   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:15 GMT
	I0716 17:31:15.749700   14768 round_trippers.go:580]     Audit-Id: d7edeb88-8f7d-4269-a453-380d383ea35a
	I0716 17:31:15.749700   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:15.750685   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:15.751108   14768 node_ready.go:49] node "functional-804300" has status "Ready":"True"
	I0716 17:31:15.751108   14768 node_ready.go:38] duration metric: took 5.5574ms for node "functional-804300" to be "Ready" ...
	I0716 17:31:15.751108   14768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
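The labels listed in that wait (`k8s-app=kube-dns`, `component=etcd`, ...) are sent to the apiserver as a `labelSelector` query parameter on the pod list endpoint. A small illustrative helper showing how such a list URL is formed (`podsByLabel` is hypothetical, assumed to match the URL shape seen in the GET lines of this log):

```go
package main

import (
	"net/url"
)

// podsByLabel builds the kube-apiserver list URL for pods in a
// namespace matching one label selector, URL-escaping the selector.
func podsByLabel(server, namespace, selector string) string {
	q := url.Values{}
	q.Set("labelSelector", selector)
	return server + "/api/v1/namespaces/" + namespace + "/pods?" + q.Encode()
}
```

Note the `=` in the selector is percent-encoded (`%3D`) on the wire.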
	I0716 17:31:15.751300   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:15.751300   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:15.751300   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:15.751300   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:15.759077   14768 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 17:31:15.759077   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:15.759077   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:15 GMT
	I0716 17:31:15.759077   14768 round_trippers.go:580]     Audit-Id: 0b32b440-abcc-4e33-a880-4321214b53b9
	I0716 17:31:15.759077   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:15.759077   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:15.759077   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:15.759077   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:15.760154   14768 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"614","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50614 chars]
	I0716 17:31:15.762803   14768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:15.763065   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z9r2k
	I0716 17:31:15.763065   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:15.763065   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:15.763132   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:15.765332   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:15.766334   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:15.766334   14768 round_trippers.go:580]     Audit-Id: e5512e9e-1dd2-4a90-97b8-9c9d4fe94b9f
	I0716 17:31:15.766334   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:15.766334   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:15.766334   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:15.766334   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:15.766334   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:15 GMT
	I0716 17:31:15.766477   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"614","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0716 17:31:15.833597   14768 request.go:629] Waited for 66.298ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:15.833869   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:15.833977   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:15.833977   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:15.833977   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:15.839069   14768 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 17:31:15.839425   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:15.839425   14768 round_trippers.go:580]     Audit-Id: a2c92f2d-9bf2-46ce-b002-bd68cd958f39
	I0716 17:31:15.839425   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:15.839425   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:15.839425   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:15.839425   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:15.839425   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:15 GMT
	I0716 17:31:15.839425   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
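The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the Kubernetes client's local token-bucket rate limiter, which delays requests once the configured QPS/burst budget is exhausted. A rough stdlib-only sketch of such a limiter (an assumed simplification, not client-go's actual implementation):

```go
package main

import (
	"time"
)

// throttle is a minimal token bucket: qps tokens refill per second,
// up to burst stored. Wait reports how long the caller should sleep
// before its request may proceed (0 means send immediately).
type throttle struct {
	qps    float64
	burst  float64
	tokens float64
	last   time.Time
}

func newThrottle(qps float64, burst int) *throttle {
	return &throttle{qps: qps, burst: float64(burst), tokens: float64(burst), last: time.Now()}
}

func (t *throttle) Wait() time.Duration {
	now := time.Now()
	// Refill tokens for the time elapsed since the last request.
	t.tokens += now.Sub(t.last).Seconds() * t.qps
	if t.tokens > t.burst {
		t.tokens = t.burst
	}
	t.last = now
	t.tokens--
	if t.tokens >= 0 {
		return 0
	}
	// Negative balance: wait until enough tokens would have refilled.
	return time.Duration(-t.tokens / t.qps * float64(time.Second))
}
```

With a low QPS and small burst, back-to-back GETs like the node/pod polls in this log quickly overdraw the bucket, producing the ~66-200ms waits recorded above.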
	I0716 17:31:15.840871   14768 pod_ready.go:92] pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:15.841013   14768 pod_ready.go:81] duration metric: took 78.1395ms for pod "coredns-7db6d8ff4d-z9r2k" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:15.841013   14768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:16.039299   14768 request.go:629] Waited for 197.931ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:16.039299   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804300
	I0716 17:31:16.039299   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:16.039299   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:16.039531   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:16.042821   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:16.042821   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:16.042821   14768 round_trippers.go:580]     Audit-Id: a000a400-9b39-45ac-81d0-ae6e97a3106f
	I0716 17:31:16.042821   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:16.042821   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:16.042821   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:16.042821   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:16.043598   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:16 GMT
	I0716 17:31:16.043787   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804300","namespace":"kube-system","uid":"972afb37-99e9-4387-b6a4-2c6d708a3bfd","resourceVersion":"628","creationTimestamp":"2024-07-17T00:28:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.236:2379","kubernetes.io/config.hash":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.mirror":"0a22d0af6cc0740e795b414759b089dd","kubernetes.io/config.seen":"2024-07-17T00:28:12.431713128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6187 chars]
	I0716 17:31:16.233426   14768 request.go:629] Waited for 188.5488ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:16.233652   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:16.233652   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:16.233853   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:16.233853   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:16.238119   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:16.238119   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:16.238119   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:16 GMT
	I0716 17:31:16.238119   14768 round_trippers.go:580]     Audit-Id: dff3712b-0190-4c73-85cc-90691a42abf3
	I0716 17:31:16.238119   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:16.238119   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:16.238119   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:16.238119   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:16.238914   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:16.239455   14768 pod_ready.go:92] pod "etcd-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:16.239455   14768 pod_ready.go:81] duration metric: took 398.3451ms for pod "etcd-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:16.239455   14768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:16.439083   14768 request.go:629] Waited for 199.2802ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-804300
	I0716 17:31:16.439288   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-804300
	I0716 17:31:16.439288   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:16.439288   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:16.439405   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:16.446834   14768 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 17:31:16.447427   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:16.447427   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:16 GMT
	I0716 17:31:16.447427   14768 round_trippers.go:580]     Audit-Id: a07a201d-6224-4e42-a06c-fbacb53de290
	I0716 17:31:16.447427   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:16.447427   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:16.447699   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:16.447699   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:16.448389   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-804300","namespace":"kube-system","uid":"3c09f919-6bd7-4bfe-928c-c394ae02b434","resourceVersion":"620","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.236:8441","kubernetes.io/config.hash":"f9b58d0560780a14b304507bf1dc73fe","kubernetes.io/config.mirror":"f9b58d0560780a14b304507bf1dc73fe","kubernetes.io/config.seen":"2024-07-17T00:28:12.431716628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8164 chars]
	I0716 17:31:16.629942   14768 request.go:629] Waited for 180.4707ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:16.630014   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:16.630014   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:16.630014   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:16.630014   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:16.633619   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:16.634108   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:16.634108   14768 round_trippers.go:580]     Audit-Id: 9df1d6a0-c816-4905-a78c-7dd0e6ece84a
	I0716 17:31:16.634108   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:16.634108   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:16.634108   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:16.634194   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:16.634194   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:16 GMT
	I0716 17:31:16.634386   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:16.634552   14768 pod_ready.go:92] pod "kube-apiserver-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:16.635113   14768 pod_ready.go:81] duration metric: took 395.6559ms for pod "kube-apiserver-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:16.635113   14768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:16.835781   14768 request.go:629] Waited for 200.5544ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-804300
	I0716 17:31:16.836076   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-804300
	I0716 17:31:16.836076   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:16.836076   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:16.836325   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:16.842623   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:16.842623   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:16.842623   14768 round_trippers.go:580]     Audit-Id: e06eb947-ec70-4155-bd3c-88f10c42a992
	I0716 17:31:16.842623   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:16.842623   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:16.842623   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:16.842623   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:16.842623   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:16 GMT
	I0716 17:31:16.843259   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-804300","namespace":"kube-system","uid":"5471a5a2-6d9a-4eff-98f0-3f94d40f7749","resourceVersion":"616","creationTimestamp":"2024-07-17T00:28:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"122f213e2f85b06cfa52df268a4fedab","kubernetes.io/config.mirror":"122f213e2f85b06cfa52df268a4fedab","kubernetes.io/config.seen":"2024-07-17T00:28:20.245856519Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0716 17:31:17.026148   14768 request.go:629] Waited for 181.9213ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:17.026382   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:17.026382   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:17.026445   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:17.026445   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:17.030121   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:17.030746   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:17.030746   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:17.030746   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:17.030746   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:17.030746   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:17 GMT
	I0716 17:31:17.030746   14768 round_trippers.go:580]     Audit-Id: 482b59eb-1b9e-4e16-835b-2e0f28a95747
	I0716 17:31:17.030746   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:17.031052   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:17.031384   14768 pod_ready.go:92] pod "kube-controller-manager-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:17.031384   14768 pod_ready.go:81] duration metric: took 396.2694ms for pod "kube-controller-manager-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:17.031384   14768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4r9g4" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:17.234000   14768 request.go:629] Waited for 202.6156ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-proxy-4r9g4
	I0716 17:31:17.234000   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-proxy-4r9g4
	I0716 17:31:17.234277   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:17.234277   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:17.234277   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:17.238293   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:17.238928   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:17.238928   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:17.238928   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:17.238928   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:17.238928   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:17.238928   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:17 GMT
	I0716 17:31:17.238928   14768 round_trippers.go:580]     Audit-Id: 7cf380f8-971e-46a5-b83b-ac0b6b9137b2
	I0716 17:31:17.239212   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4r9g4","generateName":"kube-proxy-","namespace":"kube-system","uid":"693e7731-f132-4980-84c0-f0df321e1012","resourceVersion":"612","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3c28c2cc-b21c-4aa8-83a4-78acc60f8edf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c28c2cc-b21c-4aa8-83a4-78acc60f8edf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6180 chars]
	I0716 17:31:17.440725   14768 request.go:629] Waited for 200.959ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:17.440940   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:17.440940   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:17.440940   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:17.441004   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:17.444829   14768 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:31:17.444829   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:17.444829   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:17.444829   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:17 GMT
	I0716 17:31:17.444829   14768 round_trippers.go:580]     Audit-Id: f20419f2-2839-47a3-94a1-a7fcad8be22f
	I0716 17:31:17.444829   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:17.444829   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:17.445272   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:17.445388   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:17.445388   14768 pod_ready.go:92] pod "kube-proxy-4r9g4" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:17.445388   14768 pod_ready.go:81] duration metric: took 414.002ms for pod "kube-proxy-4r9g4" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:17.445929   14768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:17.630601   14768 request.go:629] Waited for 184.3713ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804300
	I0716 17:31:17.630601   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804300
	I0716 17:31:17.630808   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:17.630808   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:17.630808   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:17.631057   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:31:17.631057   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:17.631238   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:31:17.631299   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:17.631986   14768 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:31:17.632932   14768 kapi.go:59] client config for functional-804300: &rest.Config{Host:"https://172.27.170.236:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-804300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-804300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:31:17.634000   14768 addons.go:234] Setting addon default-storageclass=true in "functional-804300"
	W0716 17:31:17.634073   14768 addons.go:243] addon default-storageclass should already be in state true
	I0716 17:31:17.634156   14768 host.go:66] Checking if "functional-804300" exists ...
	I0716 17:31:17.635374   14768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:31:17.635522   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:31:17.639689   14768 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:31:17.639689   14768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:31:17.639689   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:31:17.639689   14768 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 17:31:17.639689   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:17.639689   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:17.639689   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:17 GMT
	I0716 17:31:17.639689   14768 round_trippers.go:580]     Audit-Id: 9c686f99-c719-4b76-bbf7-58997ff1cb45
	I0716 17:31:17.639689   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:17.639689   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:17.639689   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:17.639689   14768 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804300","namespace":"kube-system","uid":"5923bbf2-211a-4508-b912-bab732c092b8","resourceVersion":"630","creationTimestamp":"2024-07-17T00:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.mirror":"8f9e809188c88ee7707e5fc5ec22f307","kubernetes.io/config.seen":"2024-07-17T00:28:12.431718628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5347 chars]
	I0716 17:31:17.836871   14768 request.go:629] Waited for 196.055ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:17.837046   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes/functional-804300
	I0716 17:31:17.837046   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:17.837134   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:17.837134   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:17.841329   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:17.841329   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:17.841329   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:17 GMT
	I0716 17:31:17.841500   14768 round_trippers.go:580]     Audit-Id: 206549a9-0942-4aa2-8247-8921ac24478a
	I0716 17:31:17.841500   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:17.841500   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:17.841750   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:17.841954   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:17.842750   14768 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:28:16Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0716 17:31:17.843218   14768 pod_ready.go:92] pod "kube-scheduler-functional-804300" in "kube-system" namespace has status "Ready":"True"
	I0716 17:31:17.843218   14768 pod_ready.go:81] duration metric: took 397.287ms for pod "kube-scheduler-functional-804300" in "kube-system" namespace to be "Ready" ...
	I0716 17:31:17.843300   14768 pod_ready.go:38] duration metric: took 2.0921832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 17:31:17.843300   14768 api_server.go:52] waiting for apiserver process to appear ...
	I0716 17:31:17.857494   14768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 17:31:17.885904   14768 command_runner.go:130] > 5881
	I0716 17:31:17.886270   14768 api_server.go:72] duration metric: took 2.4893609s to wait for apiserver process to appear ...
	I0716 17:31:17.886270   14768 api_server.go:88] waiting for apiserver healthz status ...
	I0716 17:31:17.886270   14768 api_server.go:253] Checking apiserver healthz at https://172.27.170.236:8441/healthz ...
	I0716 17:31:17.895620   14768 api_server.go:279] https://172.27.170.236:8441/healthz returned 200:
	ok
	I0716 17:31:17.895734   14768 round_trippers.go:463] GET https://172.27.170.236:8441/version
	I0716 17:31:17.895734   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:17.895833   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:17.895833   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:17.899962   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:17.899962   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:17.899962   14768 round_trippers.go:580]     Content-Length: 263
	I0716 17:31:17.899962   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:17 GMT
	I0716 17:31:17.899962   14768 round_trippers.go:580]     Audit-Id: ac4cfba8-c9dc-47f5-8fde-49229c6814d7
	I0716 17:31:17.899962   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:17.899962   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:17.899962   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:17.899962   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:17.899962   14768 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 17:31:17.899962   14768 api_server.go:141] control plane version: v1.30.2
	I0716 17:31:17.899962   14768 api_server.go:131] duration metric: took 13.6925ms to wait for apiserver health ...
	I0716 17:31:17.900945   14768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 17:31:18.025928   14768 request.go:629] Waited for 124.7823ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:18.026049   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:18.026102   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:18.026102   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:18.026102   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:18.037439   14768 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 17:31:18.037439   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:18.037565   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:18.037565   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:18.037565   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:18 GMT
	I0716 17:31:18.037565   14768 round_trippers.go:580]     Audit-Id: 24bcf63c-09dd-40a0-8cf1-148493d544ef
	I0716 17:31:18.037565   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:18.037565   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:18.040312   14768 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"614","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50614 chars]
	I0716 17:31:18.044282   14768 system_pods.go:59] 7 kube-system pods found
	I0716 17:31:18.044282   14768 system_pods.go:61] "coredns-7db6d8ff4d-z9r2k" [ba79f306-2c4d-4ee6-8622-1d2967c40c34] Running
	I0716 17:31:18.044282   14768 system_pods.go:61] "etcd-functional-804300" [972afb37-99e9-4387-b6a4-2c6d708a3bfd] Running
	I0716 17:31:18.044282   14768 system_pods.go:61] "kube-apiserver-functional-804300" [3c09f919-6bd7-4bfe-928c-c394ae02b434] Running
	I0716 17:31:18.044282   14768 system_pods.go:61] "kube-controller-manager-functional-804300" [5471a5a2-6d9a-4eff-98f0-3f94d40f7749] Running
	I0716 17:31:18.044282   14768 system_pods.go:61] "kube-proxy-4r9g4" [693e7731-f132-4980-84c0-f0df321e1012] Running
	I0716 17:31:18.044282   14768 system_pods.go:61] "kube-scheduler-functional-804300" [5923bbf2-211a-4508-b912-bab732c092b8] Running
	I0716 17:31:18.044282   14768 system_pods.go:61] "storage-provisioner" [c846d719-af54-492f-8e1a-b4bb2a912d7f] Running
	I0716 17:31:18.044282   14768 system_pods.go:74] duration metric: took 143.3372ms to wait for pod list to return data ...
	I0716 17:31:18.044282   14768 default_sa.go:34] waiting for default service account to be created ...
	I0716 17:31:18.231180   14768 request.go:629] Waited for 186.5953ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/default/serviceaccounts
	I0716 17:31:18.231376   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/default/serviceaccounts
	I0716 17:31:18.231376   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:18.231505   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:18.231505   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:18.238423   14768 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 17:31:18.238527   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:18.238595   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:18.238595   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:18.238595   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:18.238595   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:18.238698   14768 round_trippers.go:580]     Content-Length: 261
	I0716 17:31:18.238698   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:18 GMT
	I0716 17:31:18.238698   14768 round_trippers.go:580]     Audit-Id: 1d1ccf63-b64b-4687-b5c6-ba7be3456fdb
	I0716 17:31:18.238819   14768 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2d3b830a-6c70-47af-a8fe-313ccea5c181","resourceVersion":"336","creationTimestamp":"2024-07-17T00:28:34Z"}}]}
	I0716 17:31:18.239223   14768 default_sa.go:45] found service account: "default"
	I0716 17:31:18.239223   14768 default_sa.go:55] duration metric: took 194.9399ms for default service account to be created ...
	I0716 17:31:18.239223   14768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 17:31:18.437471   14768 request.go:629] Waited for 198.2474ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:18.438047   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/namespaces/kube-system/pods
	I0716 17:31:18.438119   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:18.438119   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:18.438119   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:18.443680   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:18.443680   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:18.443764   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:18.443764   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:18.443764   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:18.443764   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:18.443848   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:18 GMT
	I0716 17:31:18.443848   14768 round_trippers.go:580]     Audit-Id: ebe33f5e-2510-4215-87f5-2fa568fe1345
	I0716 17:31:18.445389   14768 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z9r2k","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ba79f306-2c4d-4ee6-8622-1d2967c40c34","resourceVersion":"614","creationTimestamp":"2024-07-17T00:28:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"55742393-d5d4-4f26-8add-d42bec3e0b11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:28:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55742393-d5d4-4f26-8add-d42bec3e0b11\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50614 chars]
	I0716 17:31:18.449366   14768 system_pods.go:86] 7 kube-system pods found
	I0716 17:31:18.449366   14768 system_pods.go:89] "coredns-7db6d8ff4d-z9r2k" [ba79f306-2c4d-4ee6-8622-1d2967c40c34] Running
	I0716 17:31:18.449474   14768 system_pods.go:89] "etcd-functional-804300" [972afb37-99e9-4387-b6a4-2c6d708a3bfd] Running
	I0716 17:31:18.449474   14768 system_pods.go:89] "kube-apiserver-functional-804300" [3c09f919-6bd7-4bfe-928c-c394ae02b434] Running
	I0716 17:31:18.449474   14768 system_pods.go:89] "kube-controller-manager-functional-804300" [5471a5a2-6d9a-4eff-98f0-3f94d40f7749] Running
	I0716 17:31:18.449474   14768 system_pods.go:89] "kube-proxy-4r9g4" [693e7731-f132-4980-84c0-f0df321e1012] Running
	I0716 17:31:18.449474   14768 system_pods.go:89] "kube-scheduler-functional-804300" [5923bbf2-211a-4508-b912-bab732c092b8] Running
	I0716 17:31:18.449474   14768 system_pods.go:89] "storage-provisioner" [c846d719-af54-492f-8e1a-b4bb2a912d7f] Running
	I0716 17:31:18.449474   14768 system_pods.go:126] duration metric: took 210.2501ms to wait for k8s-apps to be running ...
	I0716 17:31:18.449474   14768 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 17:31:18.466641   14768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 17:31:18.495751   14768 system_svc.go:56] duration metric: took 46.2765ms WaitForService to wait for kubelet
	I0716 17:31:18.495836   14768 kubeadm.go:582] duration metric: took 3.0989246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:31:18.495836   14768 node_conditions.go:102] verifying NodePressure condition ...
	I0716 17:31:18.627547   14768 request.go:629] Waited for 131.6302ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.236:8441/api/v1/nodes
	I0716 17:31:18.627675   14768 round_trippers.go:463] GET https://172.27.170.236:8441/api/v1/nodes
	I0716 17:31:18.627675   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:18.627675   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:18.627906   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:18.632523   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:18.632523   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:18.632523   14768 round_trippers.go:580]     Audit-Id: eb7bf61c-f638-4a8e-9270-2f325c3be0d8
	I0716 17:31:18.632597   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:18.632597   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:18.632597   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:18.632597   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:18.632597   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:18 GMT
	I0716 17:31:18.633305   14768 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"functional-804300","uid":"59e2d40d-89d3-474d-b8f1-0a64060deea1","resourceVersion":"547","creationTimestamp":"2024-07-17T00:28:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"functional-804300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T17_28_20_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0716 17:31:18.633953   14768 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 17:31:18.633953   14768 node_conditions.go:123] node cpu capacity is 2
	I0716 17:31:18.633953   14768 node_conditions.go:105] duration metric: took 138.1159ms to run NodePressure ...
	I0716 17:31:18.633953   14768 start.go:241] waiting for startup goroutines ...
	I0716 17:31:19.852619   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:31:19.852728   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:19.852728   14768 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:31:19.852728   14768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:31:19.852728   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
	I0716 17:31:19.854363   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:31:19.854446   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:19.854732   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:31:22.075298   14768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:31:22.075298   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:22.076044   14768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
	I0716 17:31:22.460929   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:31:22.461074   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:22.461296   14768 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
	I0716 17:31:22.601566   14768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:31:23.387721   14768 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0716 17:31:23.387762   14768 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0716 17:31:23.387762   14768 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0716 17:31:23.387762   14768 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0716 17:31:23.387762   14768 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0716 17:31:23.387762   14768 command_runner.go:130] > pod/storage-provisioner configured
	I0716 17:31:24.573824   14768 main.go:141] libmachine: [stdout =====>] : 172.27.170.236
	
	I0716 17:31:24.573824   14768 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:31:24.574718   14768 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
	I0716 17:31:24.698317   14768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:31:24.844818   14768 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0716 17:31:24.845135   14768 round_trippers.go:463] GET https://172.27.170.236:8441/apis/storage.k8s.io/v1/storageclasses
	I0716 17:31:24.845135   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:24.845239   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:24.845262   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:24.848245   14768 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 17:31:24.848245   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:24.848615   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:24.848615   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:24.848615   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:24.848615   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:24.848615   14768 round_trippers.go:580]     Content-Length: 1273
	I0716 17:31:24.848615   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:24 GMT
	I0716 17:31:24.848615   14768 round_trippers.go:580]     Audit-Id: ec90cce2-d4dc-4294-a52d-e098ba53aa4d
	I0716 17:31:24.848752   14768 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"637"},"items":[{"metadata":{"name":"standard","uid":"fce821ab-1362-4cfa-a33d-ca1c0a6970a3","resourceVersion":"436","creationTimestamp":"2024-07-17T00:28:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T00:28:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 17:31:24.849563   14768 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fce821ab-1362-4cfa-a33d-ca1c0a6970a3","resourceVersion":"436","creationTimestamp":"2024-07-17T00:28:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T00:28:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 17:31:24.849647   14768 round_trippers.go:463] PUT https://172.27.170.236:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:31:24.849647   14768 round_trippers.go:469] Request Headers:
	I0716 17:31:24.849707   14768 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:31:24.849707   14768 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:31:24.849707   14768 round_trippers.go:473]     Content-Type: application/json
	I0716 17:31:24.854587   14768 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 17:31:24.854587   14768 round_trippers.go:577] Response Headers:
	I0716 17:31:24.854587   14768 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d9a3663f-23bd-442c-a73d-362e884e1219
	I0716 17:31:24.854587   14768 round_trippers.go:580]     Content-Length: 1220
	I0716 17:31:24.854587   14768 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:31:24 GMT
	I0716 17:31:24.854587   14768 round_trippers.go:580]     Audit-Id: c262bb5f-fae1-416f-864a-36e69b0d3ebf
	I0716 17:31:24.854587   14768 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 17:31:24.854587   14768 round_trippers.go:580]     Content-Type: application/json
	I0716 17:31:24.854587   14768 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 74ddbc4a-17d5-4dda-9b7b-ef5a06881fe4
	I0716 17:31:24.854587   14768 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fce821ab-1362-4cfa-a33d-ca1c0a6970a3","resourceVersion":"436","creationTimestamp":"2024-07-17T00:28:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T00:28:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 17:31:24.858705   14768 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:31:24.860704   14768 addons.go:510] duration metric: took 9.463766s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:31:24.860704   14768 start.go:246] waiting for cluster config update ...
	I0716 17:31:24.861704   14768 start.go:255] writing updated cluster config ...
	I0716 17:31:24.872702   14768 ssh_runner.go:195] Run: rm -f paused
	I0716 17:31:25.005383   14768 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0716 17:31:25.008307   14768 out.go:177] * Done! kubectl is now configured to use "functional-804300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.675940823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.676021221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.694339850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.694665745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.694876642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.695589331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.718095497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.718327793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.718371093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:01 functional-804300 dockerd[4427]: time="2024-07-17T00:31:01.718634789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:01 functional-804300 cri-dockerd[4705]: time="2024-07-17T00:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/385c2df85f330b64a90be809706feac531f0123785a8834aebb036e064dbc453/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:31:01 functional-804300 cri-dockerd[4705]: time="2024-07-17T00:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7c43987a8c176a0c753b0b6a447cf0b32e49ec7a235e61a5b79196a46fa29073/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:31:02 functional-804300 cri-dockerd[4705]: time="2024-07-17T00:31:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fae92df09b7b9cfb111e7520f3daee67cc27bc83d9984a366d08cf8a828a88af/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.261501237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.262026931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.262179829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.262591724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.297764495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.298000292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.298101991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.303124929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.607945310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.608037909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.608049809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:31:02 functional-804300 dockerd[4427]: time="2024-07-17T00:31:02.608501903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef5c984814906       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   fae92df09b7b9       coredns-7db6d8ff4d-z9r2k
	887b1ab02a5d7       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   7c43987a8c176       storage-provisioner
	0e3a3a5d7ba58       53c535741fb44       2 minutes ago       Running             kube-proxy                2                   385c2df85f330       kube-proxy-4r9g4
	9a42bdaae501f       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   cbe0c8ddf0aec       etcd-functional-804300
	c4d936201b0aa       7820c83aa1394       2 minutes ago       Running             kube-scheduler            2                   787788385c6b2       kube-scheduler-functional-804300
	388e7d376d0ea       e874818b3caac       2 minutes ago       Running             kube-controller-manager   2                   c556c8e09a9e9       kube-controller-manager-functional-804300
	79fff3cffb4b0       56ce0fd9fb532       2 minutes ago       Running             kube-apiserver            2                   a15ee6fbf3501       kube-apiserver-functional-804300
	55a1851f1e5ff       3861cfcd7c04c       2 minutes ago       Created             etcd                      1                   e7e74e4598e2d       etcd-functional-804300
	887c015b65c0d       6e38f40d628db       2 minutes ago       Created             storage-provisioner       1                   390ac1cbe2d69       storage-provisioner
	440505e4cb791       56ce0fd9fb532       2 minutes ago       Created             kube-apiserver            1                   cdb08843a6a3d       kube-apiserver-functional-804300
	5f38b69ec3696       7820c83aa1394       2 minutes ago       Created             kube-scheduler            1                   ce84b5baad40e       kube-scheduler-functional-804300
	42e52c71b7371       53c535741fb44       2 minutes ago       Created             kube-proxy                1                   caf1d24136959       kube-proxy-4r9g4
	cff7061e1bedd       e874818b3caac       2 minutes ago       Exited              kube-controller-manager   1                   72ad6f86d4f3f       kube-controller-manager-functional-804300
	dd6946e9d4e10       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   b83305d7883c4       coredns-7db6d8ff4d-z9r2k
	
	
	==> coredns [dd6946e9d4e1] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[657857395]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:28:36.826) (total time: 30000ms):
	Trace[657857395]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:29:06.826)
	Trace[657857395]: [30.000937087s] [30.000937087s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1200031725]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:28:36.825) (total time: 30002ms):
	Trace[1200031725]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:29:06.826)
	Trace[1200031725]: [30.002774499s] [30.002774499s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[784771741]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:28:36.826) (total time: 30001ms):
	Trace[784771741]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:29:06.827)
	Trace[784771741]: [30.0019343s] [30.0019343s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46959 - 11281 "HINFO IN 4659576018938304388.5201357185765050317. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046388072s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef5c98481490] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60255 - 28750 "HINFO IN 2140925863941419637.7016072869805808738. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041692691s
	
	
	==> describe nodes <==
	Name:               functional-804300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-804300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=functional-804300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_28_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:28:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-804300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:33:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:33:02 +0000   Wed, 17 Jul 2024 00:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:33:02 +0000   Wed, 17 Jul 2024 00:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:33:02 +0000   Wed, 17 Jul 2024 00:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:33:02 +0000   Wed, 17 Jul 2024 00:28:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.236
	  Hostname:    functional-804300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7e123aa2433476e9b42e7c32ba4f43f
	  System UUID:                05c425dd-d332-b84e-9552-f1da372f1249
	  Boot ID:                    9fda111a-56b2-490a-952a-7b69a63f5975
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-z9r2k                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m34s
	  kube-system                 etcd-functional-804300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-functional-804300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-functional-804300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-4r9g4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-functional-804300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node functional-804300 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node functional-804300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node functional-804300 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m48s                  kubelet          Node functional-804300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s                  kubelet          Node functional-804300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s                  kubelet          Node functional-804300 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m43s                  kubelet          Node functional-804300 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node functional-804300 event: Registered Node functional-804300 in Controller
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node functional-804300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node functional-804300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x7 over 2m13s)  kubelet          Node functional-804300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                   node-controller  Node functional-804300 event: Registered Node functional-804300 in Controller
	
	
	==> dmesg <==
	[  +0.744896] systemd-fstab-generator[1677]: Ignoring "noauto" option for root device
	[  +7.320480] systemd-fstab-generator[1884]: Ignoring "noauto" option for root device
	[  +0.097956] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.058713] systemd-fstab-generator[2291]: Ignoring "noauto" option for root device
	[  +0.121368] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.976668] systemd-fstab-generator[2514]: Ignoring "noauto" option for root device
	[  +0.207395] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.987315] kauditd_printk_skb: 88 callbacks suppressed
	[Jul17 00:29] kauditd_printk_skb: 10 callbacks suppressed
	[Jul17 00:30] hrtimer: interrupt took 3696535 ns
	[ +25.257726] systemd-fstab-generator[3937]: Ignoring "noauto" option for root device
	[  +0.675562] systemd-fstab-generator[3973]: Ignoring "noauto" option for root device
	[  +0.263748] systemd-fstab-generator[3985]: Ignoring "noauto" option for root device
	[  +0.326709] systemd-fstab-generator[3999]: Ignoring "noauto" option for root device
	[  +5.417764] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.936762] systemd-fstab-generator[4654]: Ignoring "noauto" option for root device
	[  +0.208067] systemd-fstab-generator[4665]: Ignoring "noauto" option for root device
	[  +0.201982] systemd-fstab-generator[4677]: Ignoring "noauto" option for root device
	[  +0.276842] systemd-fstab-generator[4692]: Ignoring "noauto" option for root device
	[  +0.853237] systemd-fstab-generator[4864]: Ignoring "noauto" option for root device
	[  +4.180602] systemd-fstab-generator[5492]: Ignoring "noauto" option for root device
	[  +0.097110] kauditd_printk_skb: 180 callbacks suppressed
	[Jul17 00:31] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.335864] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.393355] systemd-fstab-generator[6525]: Ignoring "noauto" option for root device
	
	
	==> etcd [55a1851f1e5f] <==
	
	
	==> etcd [9a42bdaae501] <==
	{"level":"info","ts":"2024-07-17T00:30:57.244084Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T00:30:57.244159Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T00:30:57.244491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 switched to configuration voters=(14885484314646307000)"}
	{"level":"info","ts":"2024-07-17T00:30:57.246899Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8fb4009b45b76f87","local-member-id":"ce93dd1e142460b8","added-peer-id":"ce93dd1e142460b8","added-peer-peer-urls":["https://172.27.170.236:2380"]}
	{"level":"info","ts":"2024-07-17T00:30:57.247161Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8fb4009b45b76f87","local-member-id":"ce93dd1e142460b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:30:57.247267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:30:57.27169Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T00:30:57.274621Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ce93dd1e142460b8","initial-advertise-peer-urls":["https://172.27.170.236:2380"],"listen-peer-urls":["https://172.27.170.236:2380"],"advertise-client-urls":["https://172.27.170.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.170.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T00:30:57.273995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.170.236:2380"}
	{"level":"info","ts":"2024-07-17T00:30:57.276198Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.170.236:2380"}
	{"level":"info","ts":"2024-07-17T00:30:57.276457Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T00:30:58.442674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T00:30:58.443314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T00:30:58.443799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 received MsgPreVoteResp from ce93dd1e142460b8 at term 2"}
	{"level":"info","ts":"2024-07-17T00:30:58.444107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:30:58.444265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 received MsgVoteResp from ce93dd1e142460b8 at term 3"}
	{"level":"info","ts":"2024-07-17T00:30:58.444496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce93dd1e142460b8 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T00:30:58.444656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce93dd1e142460b8 elected leader ce93dd1e142460b8 at term 3"}
	{"level":"info","ts":"2024-07-17T00:30:58.456973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:30:58.459225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:30:58.467156Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ce93dd1e142460b8","local-member-attributes":"{Name:functional-804300 ClientURLs:[https://172.27.170.236:2379]}","request-path":"/0/members/ce93dd1e142460b8/attributes","cluster-id":"8fb4009b45b76f87","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:30:58.467464Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:30:58.467901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:30:58.4681Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:30:58.470073Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.170.236:2379"}
	
	
	==> kernel <==
	 00:33:08 up 6 min,  0 users,  load average: 0.69, 0.44, 0.21
	Linux functional-804300 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [440505e4cb79] <==
	
	
	==> kube-apiserver [79fff3cffb4b] <==
	I0717 00:31:00.029950       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 00:31:00.036063       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 00:31:00.036676       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 00:31:00.036998       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:31:00.037310       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:31:00.037963       1 aggregator.go:165] initial CRD sync complete...
	I0717 00:31:00.038175       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 00:31:00.038316       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 00:31:00.038620       1 cache.go:39] Caches are synced for autoregister controller
	I0717 00:31:00.100172       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 00:31:00.111127       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:31:00.111221       1 policy_source.go:224] refreshing policies
	I0717 00:31:00.121574       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 00:31:00.122429       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 00:31:00.137423       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 00:31:00.155399       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 00:31:00.942913       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 00:31:01.495629       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.236]
	I0717 00:31:01.497802       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:31:01.508389       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:31:01.916566       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 00:31:01.944021       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:31:02.041212       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:31:02.189647       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:31:02.217214       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [388e7d376d0e] <==
	I0717 00:31:12.890773       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0717 00:31:12.893872       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0717 00:31:12.893883       1 shared_informer.go:320] Caches are synced for cronjob
	I0717 00:31:12.896902       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0717 00:31:12.897165       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0717 00:31:12.901806       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 00:31:12.903766       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 00:31:12.907149       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0717 00:31:12.907430       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0717 00:31:12.907842       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0717 00:31:12.908089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0717 00:31:12.909409       1 shared_informer.go:320] Caches are synced for PV protection
	I0717 00:31:12.925619       1 shared_informer.go:320] Caches are synced for expand
	I0717 00:31:12.958391       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 00:31:12.963332       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 00:31:12.981789       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0717 00:31:12.982049       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 00:31:12.991007       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 00:31:13.006620       1 shared_informer.go:320] Caches are synced for deployment
	I0717 00:31:13.015313       1 shared_informer.go:320] Caches are synced for disruption
	I0717 00:31:13.054640       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:31:13.099914       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:31:13.532448       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:31:13.581592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:31:13.581878       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [cff7061e1bed] <==
	
	
	==> kube-proxy [0e3a3a5d7ba5] <==
	I0717 00:31:02.540588       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:31:02.553522       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.236"]
	I0717 00:31:02.621296       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:31:02.621340       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:31:02.621357       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:31:02.629349       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:31:02.629552       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:31:02.629625       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:02.631324       1 config.go:192] "Starting service config controller"
	I0717 00:31:02.631375       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:31:02.631415       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:31:02.631420       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:31:02.633476       1 config.go:319] "Starting node config controller"
	I0717 00:31:02.633506       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:31:02.731659       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:31:02.732019       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:31:02.733864       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [42e52c71b737] <==
	
	
	==> kube-scheduler [5f38b69ec369] <==
	
	
	==> kube-scheduler [c4d936201b0a] <==
	I0717 00:30:58.171516       1 serving.go:380] Generated self-signed cert in-memory
	I0717 00:31:00.086875       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 00:31:00.086954       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:00.091763       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 00:31:00.091797       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 00:31:00.091934       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 00:31:00.092019       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 00:31:00.092141       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 00:31:00.092166       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 00:31:00.092880       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 00:31:00.093124       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:31:00.192603       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 00:31:00.192678       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 00:31:00.192613       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.253748    5499 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.255174    5499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.919350    5499 apiserver.go:52] "Watching apiserver"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.924218    5499 topology_manager.go:215] "Topology Admit Handler" podUID="ba79f306-2c4d-4ee6-8622-1d2967c40c34" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z9r2k"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.927320    5499 topology_manager.go:215] "Topology Admit Handler" podUID="693e7731-f132-4980-84c0-f0df321e1012" podNamespace="kube-system" podName="kube-proxy-4r9g4"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.928053    5499 topology_manager.go:215] "Topology Admit Handler" podUID="c846d719-af54-492f-8e1a-b4bb2a912d7f" podNamespace="kube-system" podName="storage-provisioner"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.942096    5499 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.949956    5499 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c846d719-af54-492f-8e1a-b4bb2a912d7f-tmp\") pod \"storage-provisioner\" (UID: \"c846d719-af54-492f-8e1a-b4bb2a912d7f\") " pod="kube-system/storage-provisioner"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.950427    5499 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693e7731-f132-4980-84c0-f0df321e1012-xtables-lock\") pod \"kube-proxy-4r9g4\" (UID: \"693e7731-f132-4980-84c0-f0df321e1012\") " pod="kube-system/kube-proxy-4r9g4"
	Jul 17 00:31:00 functional-804300 kubelet[5499]: I0717 00:31:00.950517    5499 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693e7731-f132-4980-84c0-f0df321e1012-lib-modules\") pod \"kube-proxy-4r9g4\" (UID: \"693e7731-f132-4980-84c0-f0df321e1012\") " pod="kube-system/kube-proxy-4r9g4"
	Jul 17 00:31:01 functional-804300 kubelet[5499]: I0717 00:31:01.970540    5499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c43987a8c176a0c753b0b6a447cf0b32e49ec7a235e61a5b79196a46fa29073"
	Jul 17 00:31:02 functional-804300 kubelet[5499]: I0717 00:31:02.095219    5499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae92df09b7b9cfb111e7520f3daee67cc27bc83d9984a366d08cf8a828a88af"
	Jul 17 00:31:02 functional-804300 kubelet[5499]: I0717 00:31:02.105592    5499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385c2df85f330b64a90be809706feac531f0123785a8834aebb036e064dbc453"
	Jul 17 00:31:04 functional-804300 kubelet[5499]: I0717 00:31:04.181168    5499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 00:31:06 functional-804300 kubelet[5499]: I0717 00:31:06.628368    5499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 00:31:54 functional-804300 kubelet[5499]: E0717 00:31:54.995259    5499 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:31:54 functional-804300 kubelet[5499]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:31:54 functional-804300 kubelet[5499]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:31:54 functional-804300 kubelet[5499]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:31:54 functional-804300 kubelet[5499]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:32:54 functional-804300 kubelet[5499]: E0717 00:32:54.989040    5499 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:32:54 functional-804300 kubelet[5499]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:32:54 functional-804300 kubelet[5499]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:32:54 functional-804300 kubelet[5499]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:32:54 functional-804300 kubelet[5499]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [887b1ab02a5d] <==
	I0717 00:31:02.515037       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:31:02.530306       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:31:02.531097       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:31:19.950203       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:31:19.950908       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7e66389-cc07-45b2-aa15-5bca6f9c576b", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-804300_747f27a2-9738-4b3c-88a9-d7c552b8fb64 became leader
	I0717 00:31:19.951092       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-804300_747f27a2-9738-4b3c-88a9-d7c552b8fb64!
	I0717 00:31:20.051759       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-804300_747f27a2-9738-4b3c-88a9-d7c552b8fb64!
	
	
	==> storage-provisioner [887c015b65c0] <==
	

-- /stdout --
** stderr ** 
	W0716 17:33:00.223402    7388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-804300 -n functional-804300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-804300 -n functional-804300: (12.0411352s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-804300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (33.50s)

TestFunctional/parallel/ConfigCmd (1.07s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-804300 config unset cpus" to be -""- but got *"W0716 17:36:05.885231   14348 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 config get cpus: exit status 14 (166.0382ms)

** stderr ** 
	W0716 17:36:06.083663   13644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-804300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0716 17:36:06.083663   13644 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-804300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0716 17:36:06.255840    8136 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-804300 config get cpus" to be -""- but got *"W0716 17:36:06.447660    3700 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-804300 config unset cpus" to be -""- but got *"W0716 17:36:06.609043    9776 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 config get cpus: exit status 14 (165.1331ms)

** stderr ** 
	W0716 17:36:06.779064    7792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-804300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0716 17:36:06.779064    7792 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.07s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 service --namespace=default --https --url hello-node: exit status 1 (15.0235319s)

** stderr ** 
	W0716 17:36:48.403506    8704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-804300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 service hello-node --url --format={{.IP}}: exit status 1 (15.0129207s)

** stderr ** 
	W0716 17:37:03.423350    2556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-804300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 service hello-node --url: exit status 1 (15.0372887s)

** stderr ** 
	W0716 17:37:18.443122   12964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-804300 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.04s)

TestMultiControlPlane/serial/StartCluster (448.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-339000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0716 17:44:00.807175    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:46:05.787021    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:05.801876    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:05.817586    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:05.849123    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:05.895948    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:05.990531    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:06.161443    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:06.490770    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:07.132906    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:08.427838    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:10.992160    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:16.119357    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:26.363901    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:46:46.851579    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:47:27.814700    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:48:49.744017    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:49:00.799939    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ha-339000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: exit status 90 (6m55.154228s)

-- stdout --
	* [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.27.164.29
	  - NO_PROXY=172.27.164.29
	
	

-- /stdout --
** stderr ** 
	W0716 17:43:02.510371    3116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
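The log above shows the fixed order of PowerShell commands the hyperv driver issues to provision a node: New-VHD (a small fixed seed), Convert-VHD, Resize-VHD, New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, Start-VM. A minimal Go sketch of that command sequence follows; the function name and path layout are illustrative assumptions, not minikube's actual libmachine API.

```go
package main

import "fmt"

// buildHypervCreateCmds returns, in order, the PowerShell commands seen in the
// log when creating a new Hyper-V machine. Hypothetical helper: paths, the
// 10MB fixed-VHD seed, and the boot2docker.iso name mirror the log output.
func buildHypervCreateCmds(name, dir, sw string, memMB, cpus, diskMB int) []string {
	vhd := fmt.Sprintf(`%s\%s\disk.vhd`, dir, name)
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir, name),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\%s\fixed.vhd' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, dir, name, vhd),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, vhd, diskMB),
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s\%s' -SwitchName '%s' -MemoryStartupBytes %dMB`, name, dir, name, sw, memMB),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count %d`, name, cpus),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\%s\boot2docker.iso'`, name, dir, name),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s'`, name, vhd),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
}

func main() {
	// Same parameters as the log: 2 CPUs, 2200MB memory, 20000MB disk.
	for _, c := range buildHypervCreateCmds("ha-339000-m02", `C:\mk\machines`, "Default Switch", 2200, 2, 20000) {
		fmt.Println(c)
	}
}
```

Converting a tiny fixed VHD to a dynamic one and then resizing it (rather than allocating 20000MB up front) keeps the on-disk file small until the guest actually writes data.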
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
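"Waiting for host to start..." above is a polling loop: the driver repeatedly queries the VM state and its first NIC's first IP address, sleeping about a second between empty replies until an address (here 172.27.165.29) appears. A sketch of that retry pattern, assuming a hypothetical `getIP` query function and retry budget:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls getIP (standing in for the `(( Hyper-V\Get-VM <name>
// ).networkadapters[0]).ipaddresses[0]` call in the log) until it returns a
// non-empty address or the attempt budget is exhausted.
func waitForIP(getIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := getIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay) // the log shows ~1s pauses between empty replies
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	// Simulate the log: two empty replies, then the address arrives.
	replies := []string{"", "", "172.27.165.29"}
	i := 0
	ip, err := waitForIP(func() string { r := replies[i]; i++; return r }, 5, time.Millisecond)
	fmt.Println(ip, err)
}
```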
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
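The root-filesystem probe above matters because a tmpfs root (the Buildroot ISO) loses its files on every boot, so the docker unit must be rewritten each time. A minimal standalone sketch of the same check (on a persistent image it would print e.g. `ext4` instead of `tmpfs`):

```shell
#!/bin/sh
# Sketch of the root-filesystem check minikube runs over SSH.
# df --output=fstype prints a header line plus the fs type; tail
# keeps only the type itself.
fstype=$(df --output=fstype / | tail -n 1)
echo "root file system type: ${fstype}"
```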
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
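The `diff -u old new || { mv ...; systemctl ... }` command above is a compare-then-swap idiom: the unit is only installed, and docker only restarted, when the staged copy differs from (or, as here, when there is no) installed copy. A standalone sketch of the pattern with illustrative paths, not minikube's real ones:

```shell
#!/bin/sh
# Idempotent config install: replace the file and act only when the
# staged copy differs from the installed one. Paths are illustrative.
installed=/tmp/demo.service
staged=/tmp/demo.service.new
printf '%s\n' '[Unit]' 'Description=demo' > "$staged"
# diff exits non-zero when the files differ or the target is missing,
# which triggers the install branch after ||.
diff -u "$installed" "$staged" 2>/dev/null || {
    mv "$staged" "$installed"
    echo "installed new unit"
}
```

In the log this is exactly why the output shows `diff: can't stat ... No such file or directory` followed by the symlink creation: the missing target forced the install branch.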
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
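The clock-fix step above reads the guest clock with `date +%s.%N`, computes the drift against the host timestamp, and resets the guest with `date -s @<epoch>` (here a delta of about 4.7s). A local sketch of the comparison, with both reads taken on the same machine rather than host and guest:

```shell
#!/bin/sh
# Sketch of the host/guest clock comparison. In minikube the first
# read comes from the guest over SSH and the second from the host;
# here both are local, so the delta is ~0.
guest=$(date +%s)
host=$(date +%s)
delta=$((guest - host))
echo "clock delta: ${delta}s"
# When the drift is noticeable, minikube resets the guest clock:
#   sudo date -s @"${host}"   # (not run here)
```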
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
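The status 127 above is the shell's "command not found" code: the Windows binary name `curl.exe` was forwarded into the Linux guest, where no such command exists. A sketch reproducing the exit code and a portable guard (the guard is a hypothetical illustration, not what minikube does):

```shell
#!/bin/sh
# Exit status 127 means the shell could not resolve the command at
# all, e.g. a Windows name like curl.exe run inside a Linux guest.
status=0
sh -c 'no-such-command-xyz' 2>/dev/null || status=$?
echo "exit status: $status"   # prints: exit status: 127
# Hypothetical guard: only proceed if the name resolves here.
if command -v curl >/dev/null 2>&1; then
    echo "would use: curl"
fi
```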
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
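The cgroup-driver step above rewrites `/etc/containerd/config.toml` in place with a series of `sed` expressions. A minimal reproduction of the `SystemdCgroup` rewrite against a scratch copy (real path is `/etc/containerd/config.toml`; the file contents here are a trimmed illustration):

```shell
#!/bin/sh
# Reproduce one of the sed rewrites from the log on a scratch config.
cfg=/tmp/containerd-config.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Force the cgroupfs driver, using the same expression as the log:
# \1 preserves the original leading indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"   # prints:   SystemdCgroup = false
```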
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
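Having settled on docker, the step above repoints crictl from the containerd socket to the cri-dockerd socket by rewriting `/etc/crictl.yaml`. A sketch writing the same one-line config to a scratch path instead of `/etc`:

```shell
#!/bin/sh
# Write the crictl endpoint config as in the log, but to /tmp
# instead of /etc/crictl.yaml (which would need sudo).
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
    > /tmp/crictl.yaml
cat /tmp/crictl.yaml
```

crictl consults this file to decide which CRI socket to talk to, so the earlier containerd-flavored version written during driver detection is simply overwritten here.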
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-windows-amd64.exe start -p ha-339000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (11.7267973s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.1561363s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image          | functional-804300 image rm                                            | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	|                | docker.io/kicbase/echo-server:functional-804300                       |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| ssh            | functional-804300 ssh sudo cat                                        | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	|                | /usr/share/ca-certificates/47402.pem                                  |                   |                   |         |                     |                     |
	| image          | functional-804300 image ls                                            | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	| ssh            | functional-804300 ssh sudo cat                                        | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                                             |                   |                   |         |                     |                     |
	| image          | functional-804300 image load                                          | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	|                | C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| docker-env     | functional-804300 docker-env                                          | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	| image          | functional-804300 image ls                                            | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	| ssh            | functional-804300 ssh sudo cat                                        | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	|                | /etc/test/nested/copy/4740/hosts                                      |                   |                   |         |                     |                     |
	| image          | functional-804300 image save --daemon                                 | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT | 16 Jul 24 17:38 PDT |
	|                | docker.io/kicbase/echo-server:functional-804300                       |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel         | functional-804300 tunnel                                              | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel         | functional-804300 tunnel                                              | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:38 PDT |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel         | functional-804300 tunnel                                              | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| service        | functional-804300 service                                             | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:39 PDT |
	|                | hello-node-connect --url                                              |                   |                   |         |                     |                     |
	| update-context | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:39 PDT |
	|                | update-context                                                        |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |                   |         |                     |                     |
	| update-context | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:39 PDT |
	|                | update-context                                                        |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |                   |         |                     |                     |
	| update-context | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:39 PDT |
	|                | update-context                                                        |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |                   |         |                     |                     |
	| image          | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:39 PDT |
	|                | image ls --format short                                               |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image          | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:39 PDT |
	|                | image ls --format yaml                                                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| ssh            | functional-804300 ssh pgrep                                           | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT |                     |
	|                | buildkitd                                                             |                   |                   |         |                     |                     |
	| image          | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:40 PDT |
	|                | image ls --format json                                                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image          | functional-804300 image build -t                                      | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:39 PDT | 16 Jul 24 17:40 PDT |
	|                | localhost/my-image:functional-804300                                  |                   |                   |         |                     |                     |
	|                | testdata\build --alsologtostderr                                      |                   |                   |         |                     |                     |
	| image          | functional-804300                                                     | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:40 PDT | 16 Jul 24 17:40 PDT |
	|                | image ls --format table                                               |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image          | functional-804300 image ls                                            | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:40 PDT | 16 Jul 24 17:40 PDT |
	| delete         | -p functional-804300                                                  | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:41 PDT | 16 Jul 24 17:43 PDT |
	| start          | -p ha-339000 --wait=true                                              | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:43 PDT |                     |
	|                | --memory=2200 --ha                                                    |                   |                   |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                |                   |                   |         |                     |                     |
	|                | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
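The two `Get-VMSwitch` queries above filter for either an External switch or the well-known Default Switch GUID, then settle on "Default Switch" because no External switch exists. A minimal sketch of that selection policy, assuming the JSON shape shown in the log (the exact preference order inside minikube's hyperv driver is an assumption here, not taken from its source):

```python
import json

# Hyper-V's VMSwitchType enum: 0 = Private, 1 = Internal, 2 = External.
EXTERNAL = 2

def choose_switch(raw_json: str, default_switch_id: str) -> str:
    """Pick a switch name from ConvertTo-Json output: prefer any External
    switch, otherwise fall back to the Default Switch matched by Id."""
    switches = json.loads(raw_json)
    for s in switches:
        if s.get("SwitchType") == EXTERNAL:
            return s["Name"]
    for s in switches:
        if s.get("Id") == default_switch_id:
            return s["Name"]
    raise LookupError("no usable virtual switch found")
```

With the JSON from the log (one Internal switch with the Default Switch GUID), this returns "Default Switch", matching the driver's choice.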
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
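The disk preparation above follows a recognizable pattern: create a small fixed-size VHD, write a "magic" tar archive (carrying the freshly generated SSH key) directly into the raw disk image, then convert the VHD to dynamic and resize it to the requested 20000MB, leaving the tar for the guest to extract on first boot. A sketch of the tar-injection step, assuming Python's `tarfile` in place of the Go implementation (the member name `.ssh/authorized_keys` and the extraction contract are illustrative assumptions):

```python
import io
import tarfile

def build_keys_tar(pub_key: bytes) -> bytes:
    """Build an in-memory tar holding the SSH public key, mimicking the
    'magic tar header' the log reports writing into the fixed VHD."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(pub_key)
        info.mode = 0o600  # keep the key private inside the guest
        tar.addfile(info, io.BytesIO(pub_key))
    return buf.getvalue()

def write_into_disk(disk_path: str, payload: bytes) -> None:
    """Overwrite the start of the raw disk image with the tar payload;
    the guest's boot scripts are assumed to detect and extract it."""
    with open(disk_path, "r+b") as disk:
        disk.seek(0)
        disk.write(payload)
```

Writing the tar before `Convert-VHD` works because a fixed VHD stores guest-visible sectors contiguously from offset 0, so the payload lands at the start of the virtual disk.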
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
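The lines above install each CA into the OpenSSL trust directory by symlinking it under its subject hash (the `51391683.0`, `3ec20f2e.0`, and `b5213941.0` names). A minimal sketch of that convention, using a throwaway self-signed cert in a temp directory rather than the real `/etc/ssl/certs` (assumption: a plain `openssl` CLI is available; no minikube paths are touched):

```shell
# Sketch of the <subject-hash>.0 trust-link convention seen in the log.
CERT_DIR=$(mktemp -d)               # stand-in for /etc/ssl/certs
CERT="$CERT_DIR/minikubeCA.pem"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$CERT_DIR/ca.key" -out "$CERT" 2>/dev/null
# OpenSSL locates trusted CAs via symlinks named after the subject hash:
HASH=$(openssl x509 -hash -noout -in "$CERT")   # 8 hex chars, e.g. b5213941
ln -fs "$CERT" "$CERT_DIR/$HASH.0"
```

On the node, minikube does the same thing with `ln -fs` into `/etc/ssl/certs`, guarded by the `test -L`/`test -s` checks shown in the log.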
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
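The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. If the value is ever lost, it can be recomputed on a control-plane node; this sketch follows the standard kubeadm recipe (the CA path in the usage comment is the kubeadm default and is an assumption here):

```shell
# Recompute a kubeadm discovery-token CA cert hash from a CA certificate:
# sha256 over the DER-encoded public key of the CA cert.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform DER 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* /sha256:/'
}
# On a control-plane node (default kubeadm CA location):
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```

The result should match the `sha256:…` value that kubeadm printed at init time.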
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
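The long `sed` pipeline above injects the host record by rewriting the coredns ConfigMap in place. Reconstructed from the two sed expressions (the rest of the Corefile is unchanged and not captured in the log), the resulting Corefile gains a `log` directive and a `hosts` block ahead of the default forwarder:

```
.:53 {
    log
    errors
    hosts {
       172.27.160.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}
```

With `fallthrough` set, only `host.minikube.internal` is answered from the `hosts` block; everything else still reaches the forwarder.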
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
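	Every `[executing ==>]` line above is the driver shelling out to PowerShell with the same fixed flags and a per-call script. A minimal sketch of how those argv strings are assembled (command text taken verbatim from the log; the function names and structure are illustrative, not minikube source):

```python
# Reconstruction of the driver's PowerShell invocations as logged above.
# POWERSHELL and the script templates are copied from the log; the helper
# names (hyperv_argv, vm_state_script, vm_ip_script) are hypothetical.
POWERSHELL = r"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

def hyperv_argv(script: str) -> list[str]:
    """Build the argv shown after '[executing ==>]'."""
    return [POWERSHELL, "-NoProfile", "-NonInteractive", script]

def vm_state_script(vm: str) -> str:
    """Script whose stdout is the VM state, e.g. 'Running'."""
    return f"( Hyper-V\\Get-VM {vm} ).state"

def vm_ip_script(vm: str) -> str:
    """Script whose stdout is the first IP of the first adapter."""
    return f"(( Hyper-V\\Get-VM {vm} ).networkadapters[0]).ipaddresses[0]"
```

The `-NoProfile -NonInteractive` flags keep the subprocess deterministic: no user profile scripts run and no prompt can block the test.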
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
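	The "Waiting for host to start..." stretch above is a poll loop: the driver repeatedly asks Hyper-V for the adapter's address, getting empty stdout until DHCP assigns 172.27.165.29 roughly 28 seconds in. A sketch of that pattern under assumed names (`wait_for_ip`, the timeout value, and the injectable clock/sleep are illustrative, not minikube's implementation):

```python
# Poll get_ip() until it returns a non-empty string or the deadline passes,
# mirroring the empty-stdout retries seen in the log above.
import time

def wait_for_ip(get_ip, timeout=120.0, interval=1.0,
                clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while clock() < deadline:
        ip = get_ip()
        if ip:  # stdout stays empty until the guest obtains a lease
            return ip
        sleep(interval)  # log shows ~1s pauses between attempts
    raise TimeoutError("host never reported an IP address")
```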
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
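	The shell snippet just above makes the new hostname resolve locally: if no `/etc/hosts` line already ends in `ha-339000-m02`, it rewrites an existing `127.0.1.1` entry in place, otherwise appends one. The same logic re-expressed in Python for clarity (a sketch operating on file text; `ensure_hosts_entry` is a hypothetical name, not minikube code):

```python
# Mirror of the grep/sed/tee logic from the SSH command above:
# idempotently map 127.0.1.1 to the machine's hostname.
import re

def ensure_hosts_entry(hosts_text: str, name: str) -> str:
    lines = hosts_text.splitlines()
    # grep -xq '.*\s<name>': any whole line already ending in the name?
    if any(re.fullmatch(r".*\s" + re.escape(name), ln) for ln in lines):
        return hosts_text  # already present, nothing to do
    entry = f"127.0.1.1 {name}"
    for i, ln in enumerate(lines):
        if re.fullmatch(r"127\.0\.1\.1\s.*", ln):
            lines[i] = entry  # sed: replace the old 127.0.1.1 mapping
            break
    else:
        lines.append(entry)  # tee -a: no 127.0.1.1 line, append one
    return "\n".join(lines) + "\n"
```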
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
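The command at 17:48:23 uses a diff-guarded install idiom: replace the unit file and restart the service only when the new file differs (or the old one is missing, as here, where `diff` reports "can't stat"). A minimal sketch of the same idiom against throwaway files in a temp directory (hypothetical paths, not the minikube unit paths, and without the systemd reload steps):

```shell
# Diff-guarded install: only move the candidate into place when it differs
# from (or is newer than a missing) current file, mirroring the log's
# `diff -u old new || { mv new old; ...restart...; }` pattern.
tmp=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$tmp/docker.service.new"   # candidate
# docker.service does not exist yet, so diff fails and the install branch runs
diff -u "$tmp/docker.service" "$tmp/docker.service.new" 2>/dev/null \
  || mv "$tmp/docker.service.new" "$tmp/docker.service"
test -f "$tmp/docker.service" && echo "installed"
```

The `|| { ... }` grouping is what makes the operation idempotent: a second run with identical files makes `diff` succeed, so no move or service restart happens.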
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
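The clock-fix sequence above reads the guest's epoch time, compares it to the host's recorded time, and resets the guest with `sudo date -s @<epoch>` when they drift. The delta arithmetic can be sketched with whole-second values approximated from the log (the log's actual delta was 4.714892379s; integer truncation of the same timestamps gives 5):

```shell
# Guest-vs-host clock delta, using epoch seconds approximated from the log.
guest=1721177320    # guest clock reported over SSH (date +%s)
remote=1721177315   # host-side reference time, truncated to whole seconds
delta=$((guest - remote))
echo "delta=${delta}s"   # a nonzero delta triggers `sudo date -s @$guest`
```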
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
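The `status 127` failure above is the Windows binary name (`curl.exe`) leaking into a command run inside the Linux guest, where only `curl` exists. A hypothetical portability fallback (not minikube's code) would pick whichever name resolves on the current system:

```shell
# Pick the curl binary name that actually exists on this host: on Windows
# shells that expose curl.exe use it, otherwise fall back to plain curl.
probe() { command -v curl.exe >/dev/null 2>&1 && echo curl.exe || echo curl; }
probe
```

On a Linux guest like the one in this log, `command -v curl.exe` fails, so the fallback prints `curl`.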
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
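The containerd configuration pass above is a series of in-place `sed` rewrites against `/etc/containerd/config.toml`, ending with a daemon-reload and restart. The cgroup-driver rewrite from 17:48:54 can be reproduced against a throwaway file (hypothetical path; requires GNU `sed` for `-i -r`, as on the Buildroot guest):

```shell
# Force containerd to the cgroupfs driver, as the log does: flip any
# indented `SystemdCgroup = ...` line to false, preserving indentation.
tmp=$(mktemp)
printf '  SystemdCgroup = true\n' > "$tmp"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
cat "$tmp"
```

The captured leading-whitespace group (`\1`) keeps the TOML nesting intact, so the edit is safe regardless of how deep the key sits in the file.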
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.178970292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.179733794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.287787373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.287955874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.287991674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.288529075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.312510537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.312602938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.312624238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.312941738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7c292d2d62a8d       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                       3 minutes ago       Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493            3 minutes ago       Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                       3 minutes ago       Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   4 minutes ago       Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                       4 minutes ago       Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                       4 minutes ago       Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                       4 minutes ago       Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                       4 minutes ago       Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:50:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:46:40 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:46:40 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:46:40 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:46:40 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m57s
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m57s
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m55s                  kube-proxy       
	  Normal  Starting                 4m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node ha-339000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.833490] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.668916] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:45:59.84711Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.164.29:2380"}
	{"level":"info","ts":"2024-07-17T00:46:00.089685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T00:46:00.089744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T00:46:00.089796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 received MsgPreVoteResp from d327875f867c6209 at term 1"}
	{"level":"info","ts":"2024-07-17T00:46:00.089832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.089918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 received MsgVoteResp from d327875f867c6209 at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.089951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.089978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d327875f867c6209 elected leader d327875f867c6209 at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.101952Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.122119Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d327875f867c6209","local-member-attributes":"{Name:ha-339000 ClientURLs:[https://172.27.164.29:2379]}","request-path":"/0/members/d327875f867c6209/attributes","cluster-id":"afb8b16c14f756c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:46:00.122495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:46:00.122581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:46:00.13562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:46:00.135705Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:46:00.168688Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb8b16c14f756c4","local-member-id":"d327875f867c6209","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.168948Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.1787Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.177863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:46:00.178494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.164.29:2379"}
	2024/07/17 00:46:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:46:25.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.692505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:46:25.609927Z","caller":"traceutil/trace.go:171","msg":"trace[679487781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"186.853306ms","start":"2024-07-17T00:46:25.42306Z","end":"2024-07-17T00:46:25.609913Z","steps":["trace[679487781] 'range keys from in-memory index tree'  (duration: 186.648105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:39.20998Z","caller":"traceutil/trace.go:171","msg":"trace[678298741] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"177.148603ms","start":"2024-07-17T00:46:39.032813Z","end":"2024-07-17T00:46:39.209962Z","steps":["trace[678298741] 'process raft request'  (duration: 176.996702ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:50:17 up 6 min,  0 users,  load average: 0.20, 0.31, 0.16
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 00:48:17.433292       1 main.go:303] handling current node
	I0717 00:48:27.427659       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:48:27.427719       1 main.go:303] handling current node
	I0717 00:48:37.434206       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:48:37.434312       1 main.go:303] handling current node
	I0717 00:48:47.436737       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:48:47.437153       1 main.go:303] handling current node
	I0717 00:48:57.435128       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:48:57.435512       1 main.go:303] handling current node
	I0717 00:49:07.434079       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:49:07.434473       1 main.go:303] handling current node
	I0717 00:49:17.430852       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:49:17.430951       1 main.go:303] handling current node
	I0717 00:49:27.427634       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:49:27.427760       1 main.go:303] handling current node
	I0717 00:49:37.432219       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:49:37.432334       1 main.go:303] handling current node
	I0717 00:49:47.436106       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:49:47.436208       1 main.go:303] handling current node
	I0717 00:49:57.435567       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:49:57.435670       1 main.go:303] handling current node
	I0717 00:50:07.432945       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:50:07.433084       1 main.go:303] handling current node
	I0717 00:50:17.434211       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 00:50:17.434584       1 main.go:303] handling current node
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:03.469010       1 policy_source.go:224] refreshing policies
	I0717 00:46:03.490844       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:46:03.491034       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:46:03.499730       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 00:46:03.671394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:19.669123       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 00:46:19.715538       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 00:46:19.725600       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0717 00:46:19.760666       1 shared_informer.go:320] Caches are synced for disruption
	I0717 00:46:19.787853       1 shared_informer.go:320] Caches are synced for HPA
	I0717 00:46:19.810929       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:46:19.834028       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:46:20.270902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:46:20.270997       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 00:46:20.279704       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:46:20.756683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.791759ms"
	I0717 00:46:20.809935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.984319ms"
	I0717 00:46:20.810136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="102.601µs"
	I0717 00:46:20.810666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="259.402µs"
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:46:40 ha-339000 kubelet[2368]: I0717 00:46:40.563694    2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2s6\" (UniqueName: \"kubernetes.io/projected/9d6e457d-c2dd-4d34-9593-5499996b3dcb-kube-api-access-fz2s6\") pod \"coredns-7db6d8ff4d-tnbkg\" (UID: \"9d6e457d-c2dd-4d34-9593-5499996b3dcb\") " pod="kube-system/coredns-7db6d8ff4d-tnbkg"
	Jul 17 00:46:40 ha-339000 kubelet[2368]: I0717 00:46:40.563719    2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d7bf2e2-1953-42b7-8189-94c09e9fde24-config-volume\") pod \"coredns-7db6d8ff4d-fnphs\" (UID: \"0d7bf2e2-1953-42b7-8189-94c09e9fde24\") " pod="kube-system/coredns-7db6d8ff4d-fnphs"
	Jul 17 00:46:40 ha-339000 kubelet[2368]: I0717 00:46:40.563738    2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xtv\" (UniqueName: \"kubernetes.io/projected/0d7bf2e2-1953-42b7-8189-94c09e9fde24-kube-api-access-p7xtv\") pod \"coredns-7db6d8ff4d-fnphs\" (UID: \"0d7bf2e2-1953-42b7-8189-94c09e9fde24\") " pod="kube-system/coredns-7db6d8ff4d-fnphs"
	Jul 17 00:46:42 ha-339000 kubelet[2368]: I0717 00:46:42.881883    2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tnbkg" podStartSLOduration=22.881865801 podStartE2EDuration="22.881865801s" podCreationTimestamp="2024-07-17 00:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 00:46:42.881422903 +0000 UTC m=+35.361223181" watchObservedRunningTime="2024-07-17 00:46:42.881865801 +0000 UTC m=+35.361666079"
	Jul 17 00:46:43 ha-339000 kubelet[2368]: I0717 00:46:43.028641    2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fnphs" podStartSLOduration=23.028625548 podStartE2EDuration="23.028625548s" podCreationTimestamp="2024-07-17 00:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 00:46:43.02805725 +0000 UTC m=+35.507857528" watchObservedRunningTime="2024-07-17 00:46:43.028625548 +0000 UTC m=+35.508425826"
	Jul 17 00:47:07 ha-339000 kubelet[2368]: E0717 00:47:07.786533    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:47:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:47:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:47:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:47:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:48:07 ha-339000 kubelet[2368]: E0717 00:48:07.788320    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:48:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:48:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:48:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:48:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:49:07 ha-339000 kubelet[2368]: E0717 00:49:07.787038    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:49:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:49:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:49:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:49:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:50:07 ha-339000 kubelet[2368]: E0717 00:50:07.793072    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:50:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:50:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:50:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:50:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7cb40bd8f4a4] <==
	I0717 00:46:42.153764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:46:42.175980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:46:42.177529       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:46:42.200238       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:46:42.200622       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a!
	I0717 00:46:42.204971       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8dc52c6-8b6a-4e66-9d75-dd4099bee1cb", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a became leader
	I0717 00:46:42.301686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 17:50:09.971093    6404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
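The long hex directory in the stderr warning above is not random: the Docker CLI stores context metadata under the SHA-256 digest of the context name, so the context "default" resolves to that `37a8eec1…` path. A minimal sketch confirming the mapping:

```python
import hashlib

# Docker CLI context metadata lives at
#   ~/.docker/contexts/meta/<sha256(context name)>/meta.json
# so "default" maps to the hex directory seen in the warning above.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
```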
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (11.7923021s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (448.38s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (755.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- rollout status deployment/busybox
E0716 17:51:05.780027    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:51:33.599690    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:54:00.806868    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:56:05.784251    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 17:57:04.012325    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:59:00.798343    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- rollout status deployment/busybox: exit status 1 (10m3.8477914s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 17:50:31.318938    8812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:00:35.162400    8136 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:00:36.412505   14656 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:00:38.039106    9500 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:00:41.600654    7684 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:00:43.960623    8148 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:00:50.935766    2600 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:01:00.897515    3716 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0716 18:01:05.790603    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:01:11.998047   13640 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:01:23.120377    4388 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:01:42.503560    2700 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:02:27.062988   10396 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0716 18:02:27.062988   10396 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- nslookup kubernetes.io
E0716 18:02:28.965505    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- nslookup kubernetes.io: (1.7763659s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- nslookup kubernetes.io: exit status 1 (331.0731ms)

** stderr ** 
	W0716 18:02:29.514984   14764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-7zvzh does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-7zvzh could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- nslookup kubernetes.io: exit status 1 (321.0654ms)

** stderr ** 
	W0716 18:02:29.852381    9504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-8tbsm does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-8tbsm could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- nslookup kubernetes.default: exit status 1 (324.3595ms)

** stderr ** 
	W0716 18:02:30.694909   13872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-7zvzh does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-7zvzh could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- nslookup kubernetes.default: exit status 1 (381.9006ms)

** stderr ** 
	W0716 18:02:31.026333    7112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-8tbsm does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-8tbsm could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (345.186ms)

** stderr ** 
	W0716 18:02:31.853006    9068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-7zvzh does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-7zvzh could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (362.1391ms)

** stderr ** 
	W0716 18:02:32.199153   13684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-8tbsm does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-8tbsm could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.2700157s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.6206108s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p functional-804300                 | functional-804300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:41 PDT | 16 Jul 24 17:43 PDT |
	| start   | -p ha-339000 --wait=true             | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:43 PDT |                     |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- apply -f             | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:50 PDT | 16 Jul 24 17:50 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- rollout status       | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:50 PDT |                     |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
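The SSH command above uses an install-if-changed idiom. A minimal sketch of it, run against a scratch directory (paths and unit contents are illustrative): the staged `.new` file replaces the unit only when the two differ; on the real host this is followed by `systemctl daemon-reload` / `enable` / `restart`.

```shell
# Install-if-changed sketch: replace the unit only when the staged copy
# differs from what is installed.
tmp=$(mktemp -d)
unit="$tmp/docker.service"
staged="$tmp/docker.service.new"
printf 'ExecStart=dockerd\n' > "$staged"
# diff fails here (the unit does not exist yet, matching the "can't stat"
# message in the log), so the staged copy is moved into place.
diff -u "$unit" "$staged" >/dev/null 2>&1 || mv "$staged" "$unit"
cat "$unit"
```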
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
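The clock fix above compares the guest's `date +%s.%N` output against the host clock and resets the guest with `sudo date -s @<epoch>`. A sketch of the drift computation, using the two timestamps reported in this log:

```shell
# Drift check: compare guest and host epoch timestamps (values taken from
# the log lines above) and report the absolute difference.
guest=1721177115.953724044   # guest clock (date +%s.%N)
host=1721177111.218061100    # host clock at the same moment
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.3f", d }')
echo "drift: ${delta}s"
```

A drift this large (several seconds) is why the flow unconditionally runs `sudo date -s` rather than relying on NTP inside the freshly booted VM.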
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
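The series of `sed` runs above rewrites `/etc/containerd/config.toml` in place. One of them, replayed against a toy config file (contents illustrative): force `SystemdCgroup = false` so the runtime agrees with the "cgroupfs" cgroup-driver choice.

```shell
# Replay one containerd config rewrite on a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# The \1 backreference preserves the original indentation of the key.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```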
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
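The `/etc/hosts` update above is idempotent: strip any stale `host.minikube.internal` line, then append the current mapping. A scratch-file version (the real command does the final swap with `sudo cp`, and the grep pattern here is simplified):

```shell
# Idempotent hosts-entry update against a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.27.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the stale mapping, append the fresh one, swap the file into place.
{ grep -v 'host\.minikube\.internal$' "$hosts"; printf '172.27.160.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```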
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
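The preload unpack above, miniaturized: pack one file and extract it the same way. lz4 is swapped for gzip here so the sketch runs anywhere; the real command also passes `--xattrs --xattrs-include security.capability` so file capabilities on the preloaded binaries survive extraction.

```shell
# Pack-and-unpack sketch of the preload tarball flow (gzip stands in
# for lz4; paths are illustrative).
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/preloaded.txt"
tar -C "$src" -czf "$dst/preloaded.tar.gz" preloaded.txt
tar -C "$dst" -xzf "$dst/preloaded.tar.gz"
cat "$dst/preloaded.txt"
```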
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
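The "Images are preloaded, skipping loading" decision above boils down to comparing what `docker images --format {{.Repository}}:{{.Tag}}` reports against the images the bootstrap needs. A sketch with a made-up, abbreviated listing (both lists are illustrative):

```shell
# Compare an image listing against a required set and count what is missing.
have='registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0'
missing=0
for img in registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/coredns/coredns:v1.11.1; do
  # -x matches the whole line, -F treats the image ref as a fixed string.
  echo "$have" | grep -qxF "$img" || { echo "missing: $img"; missing=$((missing + 1)); }
done
echo "missing count: $missing"
```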
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
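The `openssl x509 -hash` / `ln -fs` pairs above populate OpenSSL's trust store, which resolves CA certificates in /etc/ssl/certs via `<subject-hash>.0` symlinks. A sketch of the same dance with a hypothetical throwaway CA (standing in for minikubeCA.pem) in a temp directory:

```shell
set -eu
dir=$(mktemp -d)
# hypothetical self-signed CA, used only for illustration
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# compute the 8-hex-digit subject hash OpenSSL uses for lookup
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"   # same ln -fs idiom as the log above
ls -l "$dir/$hash.0"
```

The `b5213941.0` and `3ec20f2e.0` names in the log are exactly such subject hashes for minikubeCA.pem and 47402.pem.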
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
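The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the cluster CA's DER-encoded public key. A sketch of the derivation, using a hypothetical throwaway RSA CA in place of /etc/kubernetes/pki/ca.crt:

```shell
set -eu
dir=$(mktemp -d)
# hypothetical stand-in for the cluster CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=kubernetes' \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
# extract the public key, re-encode as DER, hash it -- the kubeadm-documented pipeline
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | sed 's/^.* //')
echo "sha256:$hash"
```

Joining nodes recompute this hash over the CA they fetch via the bootstrap token, so a man-in-the-middle serving a different CA is detected before the kubelet trusts the API server.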
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
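The run of `sed` commands above rewrites `/etc/containerd/config.toml` so containerd uses the "cgroupfs" cgroup driver (`SystemdCgroup = false`) and the `io.containerd.runc.v2` shim. A minimal local reproduction of the key substitution, using an illustrative config fragment rather than the VM's real file:

```shell
# Sketch of the SystemdCgroup edit logged above; the config.toml contents here
# are illustrative, not copied from the minikube VM.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution as the logged ssh_runner command: force the cgroupfs driver.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
grep 'SystemdCgroup' "$tmp"
rm -f "$tmp"
```

The capture group preserves the line's original indentation, which matters because config.toml nests options under their plugin table.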
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
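The 130-byte `/etc/docker/daemon.json` copied above is not shown in the log; a plausible sketch of what a "cgroupfs"-driver daemon.json looks like (field values are assumptions, apart from `overlay2`, which the dockerd startup log below confirms as the storage driver):

```shell
# Hypothetical daemon.json matching the "cgroupfs" driver minikube configures;
# exact contents of the real 130-byte file are not in the log.
cat <<'EOF' > daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "storage-driver": "overlay2"
}
EOF
```

The cgroup driver must match what containerd and the kubelet were configured with in the preceding steps, which is why minikube rewrites both files in the same pass.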
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
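The root cause in the journal above is dockerd timing out while dialing `/run/containerd/containerd.sock`: on the second start (dockerd[1072]) it no longer launches a managed containerd, and the external socket never appears before the dial deadline. The failure pattern can be sketched with an illustrative socket path:

```shell
# Sketch of the dial-timeout behavior behind "failed to dial
# /run/containerd/containerd.sock: context deadline exceeded".
# Socket path and deadline are illustrative, not minikube's real values.
sock=/tmp/demo-containerd.sock
rm -f "$sock"
connected=no
for i in 1 2; do
  # A real client would attempt a gRPC dial; here we just probe for the socket.
  if [ -S "$sock" ]; then connected=yes; break; fi
  sleep 1
done
if [ "$connected" = yes ]; then
  echo "connected to $sock"
else
  echo "failed to dial $sock: context deadline exceeded"
fi
```

On a real node the usual triage is `systemctl status containerd` and `journalctl -u containerd`, since docker cannot come up until containerd is listening on its socket.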
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              16 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         16 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     16 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         16 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         16 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         16 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:02:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.668916] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:00.089951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.089978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d327875f867c6209 elected leader d327875f867c6209 at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.101952Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.122119Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d327875f867c6209","local-member-attributes":"{Name:ha-339000 ClientURLs:[https://172.27.164.29:2379]}","request-path":"/0/members/d327875f867c6209/attributes","cluster-id":"afb8b16c14f756c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:46:00.122495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:46:00.122581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:46:00.13562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:46:00.135705Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:46:00.168688Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb8b16c14f756c4","local-member-id":"d327875f867c6209","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.168948Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.1787Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.177863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:46:00.178494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.164.29:2379"}
	2024/07/17 00:46:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:46:25.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.692505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:46:25.609927Z","caller":"traceutil/trace.go:171","msg":"trace[679487781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"186.853306ms","start":"2024-07-17T00:46:25.42306Z","end":"2024-07-17T00:46:25.609913Z","steps":["trace[679487781] 'range keys from in-memory index tree'  (duration: 186.648105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:39.20998Z","caller":"traceutil/trace.go:171","msg":"trace[678298741] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"177.148603ms","start":"2024-07-17T00:46:39.032813Z","end":"2024-07-17T00:46:39.209962Z","steps":["trace[678298741] 'process raft request'  (duration: 176.996702ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	
	
	==> kernel <==
	 01:02:52 up 18 min,  0 users,  load average: 0.32, 0.47, 0.37
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:00:47.427773       1 main.go:303] handling current node
	I0717 01:00:57.429108       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:00:57.429298       1 main.go:303] handling current node
	I0717 01:01:07.429576       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:07.429631       1 main.go:303] handling current node
	I0717 01:01:17.436959       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:17.437058       1 main.go:303] handling current node
	I0717 01:01:27.427931       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:27.428041       1 main.go:303] handling current node
	I0717 01:01:37.430670       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:37.430788       1 main.go:303] handling current node
	I0717 01:01:47.434836       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:47.434871       1 main.go:303] handling current node
	I0717 01:01:57.437260       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:57.437365       1 main.go:303] handling current node
	I0717 01:02:07.429503       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:07.429566       1 main.go:303] handling current node
	I0717 01:02:17.433878       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:17.434077       1 main.go:303] handling current node
	I0717 01:02:27.428665       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:27.428785       1 main.go:303] handling current node
	I0717 01:02:37.428541       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:37.428918       1 main.go:303] handling current node
	I0717 01:02:47.427782       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:47.428156       1 main.go:303] handling current node
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:03.499730       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 00:46:03.671394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:19.810929       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:46:19.834028       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:46:20.270902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:46:20.270997       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 00:46:20.279704       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:46:20.756683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.791759ms"
	I0717 00:46:20.809935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.984319ms"
	I0717 00:46:20.810136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="102.601µs"
	I0717 00:46:20.810666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="259.402µs"
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:58:07 ha-339000 kubelet[2368]: E0717 00:58:07.786948    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:58:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:58:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:58:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:58:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:59:07 ha-339000 kubelet[2368]: E0717 00:59:07.787001    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:59:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:59:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:59:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:59:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:00:07 ha-339000 kubelet[2368]: E0717 01:00:07.786832    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:00:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:00:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:00:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:00:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:01:07 ha-339000 kubelet[2368]: E0717 01:01:07.786282    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:01:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:01:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:01:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:01:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:02:07 ha-339000 kubelet[2368]: E0717 01:02:07.785363    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:02:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:02:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:02:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:02:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7cb40bd8f4a4] <==
	I0717 00:46:42.153764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:46:42.175980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:46:42.177529       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:46:42.200238       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:46:42.200622       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a!
	I0717 00:46:42.204971       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8dc52c6-8b6a-4e66-9d75-dd4099bee1cb", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a became leader
	I0717 00:46:42.301686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:02:44.826420   11236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
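The `meta.json` path in the warning above embeds a hex digest of the context name: Docker's context store keys each context directory by the SHA-256 of its name, so the `37a8eec1…` directory for the context "default" can be reproduced as a quick sketch (assuming standard coreutils `sha256sum`):

```shell
# SHA-256 of the literal context name "default" (no trailing newline);
# this matches the 37a8eec1... directory name in the path above.
digest=$(printf '%s' default | sha256sum | cut -d' ' -f1)
echo "$digest"
```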
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.1842371s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh busybox-fc5497c4f-8tbsm
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh busybox-fc5497c4f-8tbsm
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh busybox-fc5497c4f-8tbsm:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m28s (x4 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-8tbsm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b69p9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-b69p9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m28s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (755.83s)
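The FailedScheduling message above ("1 node(s) didn't match pod anti-affinity rules") is what the scheduler emits when a workload requires replicas on distinct nodes but too few nodes are Ready; a minimal sketch of the kind of Deployment stanza that produces it (all field values here are assumptions, not read from this run):

```yaml
# Hypothetical pod-template fragment: at most one app=busybox pod per node.
# With 3 replicas but only 1 Ready node, 2 pods stay Pending, as seen above.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: busybox
        topologyKey: kubernetes.io/hostname
```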

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (46.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- sh -c "ping -c 1 172.27.160.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-2lw5c -- sh -c "ping -c 1 172.27.160.1": exit status 1 (10.4620864s)

                                                
                                                
-- stdout --
	PING 172.27.160.1 (172.27.160.1): 56 data bytes
	
	--- 172.27.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:03:07.497696   10520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.27.160.1) from pod (busybox-fc5497c4f-2lw5c): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-7zvzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (371.7798ms)

                                                
                                                
** stderr ** 
	W0716 18:03:17.975355    2100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-7zvzh does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-7zvzh could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-339000 -- exec busybox-fc5497c4f-8tbsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (357.8186ms)

                                                
                                                
** stderr ** 
	W0716 18:03:18.349635    9628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-8tbsm does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-8tbsm could not resolve 'host.minikube.internal': exit status 1
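ha_test.go:207 extracts the host IP by taking line 5 of the pod's nslookup output and its third space-separated field; the pipeline can be sketched against canned output (the sample below is a hypothetical busybox-style reply, not captured from this run):

```shell
# Hypothetical busybox nslookup output; line 5 is "Address 1: <ip> <name>"
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 172.27.160.1 host.minikube.internal'

# Same pipeline the test runs inside the pod: NR==5 picks the answer line,
# cut -d" " -f3 picks the IP field.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # 172.27.160.1
```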
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.3482091s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.703348s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
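The switch-selection step recorded above (query `Get-VMSwitch` as JSON, prefer an External switch, otherwise fall back to the "Default Switch") can be sketched in Go. This is a minimal illustration of the logic visible in the log, not minikube's actual implementation; the `pickSwitch` helper and its fallback rule are assumptions drawn from the probe's `Where-Object` filter.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch probe in the log.
// In Hyper-V, SwitchType 2 means External, 1 Internal, 0 Private.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch prefers an External switch; otherwise it falls back to the
// first entry returned (the "Default Switch" in the log above).
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	for _, s := range switches {
		if s.SwitchType == 2 {
			return s.Name, nil
		}
	}
	if len(switches) > 0 {
		return switches[0].Name, nil
	}
	return "", fmt.Errorf("no usable VM switch found")
}

func main() {
	// The JSON payload emitted by the probe in the log, reflowed onto one line.
	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println("Using switch:", name)
}
```

With only the internal "Default Switch" present, no External candidate matches, so the fallback branch selects it, matching the log's `Using switch "Default Switch"`.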
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
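The three VHD operations logged above (create a small fixed VHD to carry the magic tar header and SSH key, convert it to a dynamic VHD while deleting the source, then resize it to the requested disk size) can be sketched as command-string construction. The `vhdCommands` helper below is illustrative, assumed for this sketch rather than taken from minikube's driver code; only the PowerShell invocations it emits come from the log.

```go
package main

import "fmt"

// vhdCommands reproduces the three Hyper-V PowerShell invocations seen in
// the log: New-VHD (fixed, 10MB), Convert-VHD (to dynamic, deleting the
// fixed source), and Resize-VHD (to the requested size in MB).
func vhdCommands(dir string, sizeMB int) []string {
	fixed := dir + `\fixed.vhd`
	disk := dir + `\disk.vhd`
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
}

func main() {
	for _, c := range vhdCommands(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000`, 20000) {
		fmt.Println(c)
	}
}
```

The fixed-then-dynamic dance lets the driver write raw bytes (the tar header) at a known offset in the fixed image before handing Hyper-V a sparse dynamic disk of the full 20000MB size.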
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
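The "Waiting for host to start..." exchange above is a poll loop: query the VM's first network adapter for `ipaddresses[0]`, and retry after a short sleep while the result is empty (the log shows roughly one-second gaps between empty probes until 172.27.164.29 appears). A minimal Go sketch of that loop, with the PowerShell query abstracted behind a `probe` callback (both `waitForIP` and `probe` are names assumed for this sketch):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls until the probe reports a non-empty address, sleeping
// between attempts, as the driver does while the guest boots and DHCP
// hands out a lease.
func waitForIP(probe func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := probe(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("host never reported an IP address")
}

func main() {
	// Simulated probe: empty for the first few polls, then the address
	// the log eventually shows.
	results := []string{"", "", "", "", "172.27.164.29"}
	i := 0
	probe := func() string {
		ip := results[i%len(results)]
		i++
		return ip
	}
	ip, err := waitForIP(probe, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println("host is up at", ip)
}
```

Bounding the attempts (minikube uses the 6m0s StartHostTimeout from the config above) is what turns a hung boot into a reportable error instead of an indefinite stall.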
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
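The provisioning step above installs docker.service with a compare-then-swap: the candidate unit is written to `docker.service.new`, and only when `diff` reports a difference (or the installed unit is missing) does minikube move it into place and reload/restart systemd. A minimal sketch of that shape, using scratch paths instead of /lib/systemd/system and `echo` in place of the systemctl calls:

```shell
#!/bin/sh
# Compare-then-swap install of a unit file, as in the log's
# `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload; ... }` line.
# All paths here are throwaway stand-ins for illustration.
set -eu

unit_dir=$(mktemp -d)
new="$unit_dir/docker.service.new"
cur="$unit_dir/docker.service"

printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd' > "$new"

# diff exits non-zero when the files differ or the installed unit is
# missing; either way the replacement branch runs, exactly as in the log
# (where "can't stat '/lib/systemd/system/docker.service'" triggered it).
if ! diff -u "$cur" "$new" >/dev/null 2>&1; then
  mv "$new" "$cur"
  echo "replaced"   # real flow: systemctl daemon-reload && restart docker
else
  echo "unchanged"
fi
```

The empty `ExecStart=` line before the real one clears any inherited command, avoiding systemd's "more than one ExecStart= setting" error described in the unit's comments.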
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
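The clock fix above works by reading the guest's fractional epoch with `date +%s.%N`, computing the drift against the host, and resetting the guest with `sudo date -s @<epoch>`. A sketch of the delta computation, using the two timestamps reported in the log (the integer math truncates the fractional part, so it approximates the logged 4.735s delta); the reset command is only printed, since actually setting the clock needs root:

```shell
#!/bin/sh
# Host/guest clock drift check, mirroring the fix.go lines above.
# Timestamps are the ones the log reported; truncation to whole seconds
# is an illustrative simplification.
set -eu

guest_epoch=1721177115.953724044   # guest's `date +%s.%N` output
host_epoch=1721177111              # host time at the comparison, whole seconds

guest_secs=${guest_epoch%.*}       # drop the nanosecond fraction
delta=$((guest_secs - host_epoch))
echo "delta=${delta}s"
echo "would run: sudo date -s @${host_epoch}"
```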
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
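The containerd reconfiguration above is a series of in-place `sed` edits to /etc/containerd/config.toml: flipping `SystemdCgroup` off for the cgroupfs driver, pinning the sandbox image, and so on. A self-contained sketch of two of those edits, run against a throwaway config file rather than the real one:

```shell
#!/bin/sh
# In-place sed edits to a containerd config, as in the log's
# `sudo sed -i -r ...` commands. Operates on a scratch copy.
set -eu

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
EOF

# \1 preserves the original indentation captured by ( *)
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"

grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```

After the edits, `systemctl daemon-reload` and `systemctl restart containerd` (as in the log) make the new settings take effect.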
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
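The `/etc/hosts` update above is an idempotent append: filter out any existing `host.minikube.internal` line, re-append one with the current gateway IP, and copy the result back over the original. The same shape against a scratch file (the seed entries and IPs here are illustrative):

```shell
#!/bin/sh
# Idempotent hosts-entry refresh, mirroring the log's
# `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` pattern.
set -eu

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.27.160.9\thost.minikube.internal\n' > "$hosts"

ip="172.27.160.1"
tab=$(printf '\t')
# Drop the stale entry (if any), then append the fresh one.
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep host.minikube.internal "$hosts"
```

Running it again would yield the same single entry, which is why the real flow can be re-run safely on every start.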
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
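The preload flow above first probes the target path with `stat -c "%s %y"`; a non-zero exit ("No such file or directory") is the cue to scp the tarball over, extract it with `tar -I lz4`, and remove it. A sketch of just the probe-then-copy decision, against a scratch path with a no-op standing in for the scp:

```shell
#!/bin/sh
# Existence probe before copying, as in the log's
# `existence check for /preloaded.tar.lz4` step.
set -eu

target=$(mktemp -u)   # a path that does not exist yet

if stat -c "%s %y" "$target" >/dev/null 2>&1; then
  echo "already present, skipping copy"
else
  echo "missing, would copy preloaded.tar.lz4 here"
  : > "$target"       # stand-in for the actual scp
fi
```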
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              17 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         17 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     17 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         17 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         17 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         17 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:03:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:00:54 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.668916] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:00.089951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d327875f867c6209 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.089978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d327875f867c6209 elected leader d327875f867c6209 at term 2"}
	{"level":"info","ts":"2024-07-17T00:46:00.101952Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.122119Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d327875f867c6209","local-member-attributes":"{Name:ha-339000 ClientURLs:[https://172.27.164.29:2379]}","request-path":"/0/members/d327875f867c6209/attributes","cluster-id":"afb8b16c14f756c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:46:00.122495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:46:00.122581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:46:00.13562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:46:00.135705Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:46:00.168688Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb8b16c14f756c4","local-member-id":"d327875f867c6209","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.168948Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.1787Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:46:00.177863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:46:00.178494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.164.29:2379"}
	2024/07/17 00:46:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:46:25.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.692505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:46:25.609927Z","caller":"traceutil/trace.go:171","msg":"trace[679487781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"186.853306ms","start":"2024-07-17T00:46:25.42306Z","end":"2024-07-17T00:46:25.609913Z","steps":["trace[679487781] 'range keys from in-memory index tree'  (duration: 186.648105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:39.20998Z","caller":"traceutil/trace.go:171","msg":"trace[678298741] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"177.148603ms","start":"2024-07-17T00:46:39.032813Z","end":"2024-07-17T00:46:39.209962Z","steps":["trace[678298741] 'process raft request'  (duration: 176.996702ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	
	
	==> kernel <==
	 01:03:39 up 19 min,  0 users,  load average: 0.94, 0.58, 0.42
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:01:37.430788       1 main.go:303] handling current node
	I0717 01:01:47.434836       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:47.434871       1 main.go:303] handling current node
	I0717 01:01:57.437260       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:01:57.437365       1 main.go:303] handling current node
	I0717 01:02:07.429503       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:07.429566       1 main.go:303] handling current node
	I0717 01:02:17.433878       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:17.434077       1 main.go:303] handling current node
	I0717 01:02:27.428665       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:27.428785       1 main.go:303] handling current node
	I0717 01:02:37.428541       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:37.428918       1 main.go:303] handling current node
	I0717 01:02:47.427782       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:47.428156       1 main.go:303] handling current node
	I0717 01:02:57.427803       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:02:57.427981       1 main.go:303] handling current node
	I0717 01:03:07.431378       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:03:07.431591       1 main.go:303] handling current node
	I0717 01:03:17.433368       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:03:17.433560       1 main.go:303] handling current node
	I0717 01:03:27.427693       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:03:27.427830       1 main.go:303] handling current node
	I0717 01:03:37.430683       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:03:37.430876       1 main.go:303] handling current node
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:19.810929       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:46:19.834028       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:46:20.270902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:46:20.270997       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 00:46:20.279704       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:46:20.756683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.791759ms"
	I0717 00:46:20.809935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.984319ms"
	I0717 00:46:20.810136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="102.601µs"
	I0717 00:46:20.810666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="259.402µs"
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:59:07 ha-339000 kubelet[2368]: E0717 00:59:07.787001    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:59:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:59:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:59:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:59:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:00:07 ha-339000 kubelet[2368]: E0717 01:00:07.786832    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:00:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:00:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:00:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:00:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:01:07 ha-339000 kubelet[2368]: E0717 01:01:07.786282    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:01:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:01:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:01:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:01:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:02:07 ha-339000 kubelet[2368]: E0717 01:02:07.785363    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:02:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:02:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:02:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:02:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:03:07 ha-339000 kubelet[2368]: E0717 01:03:07.799313    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:03:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:03:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:03:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:03:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7cb40bd8f4a4] <==
	I0717 00:46:42.153764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:46:42.175980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:46:42.177529       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:46:42.200238       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:46:42.200622       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a!
	I0717 00:46:42.204971       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8dc52c6-8b6a-4e66-9d75-dd4099bee1cb", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a became leader
	I0717 00:46:42.301686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-339000_1b740c47-9b18-43c2-beed-040e32db3f5a!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:03:31.047301    8656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.1754566s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh busybox-fc5497c4f-8tbsm
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh busybox-fc5497c4f-8tbsm
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh busybox-fc5497c4f-8tbsm:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m14s (x4 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-8tbsm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b69p9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-b69p9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m14s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (46.29s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (279.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-339000 -v=7 --alsologtostderr
E0716 18:04:00.799046    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 18:06:05.795096    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-339000 -v=7 --alsologtostderr: (3m30.3570577s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: exit status 2 (35.484896s)

                                                
                                                
-- stdout --
	ha-339000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-339000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-339000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:07:23.365720    2700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 18:07:23.373751    2700 out.go:291] Setting OutFile to fd 612 ...
	I0716 18:07:23.374736    2700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:07:23.374736    2700 out.go:304] Setting ErrFile to fd 944...
	I0716 18:07:23.374736    2700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:07:23.388730    2700 out.go:298] Setting JSON to false
	I0716 18:07:23.388730    2700 mustload.go:65] Loading cluster: ha-339000
	I0716 18:07:23.388730    2700 notify.go:220] Checking for updates...
	I0716 18:07:23.389739    2700 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:07:23.389739    2700 status.go:255] checking status of ha-339000 ...
	I0716 18:07:23.390737    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:07:25.554554    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:25.554796    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:25.554796    2700 status.go:330] ha-339000 host status = "Running" (err=<nil>)
	I0716 18:07:25.554918    2700 host.go:66] Checking if "ha-339000" exists ...
	I0716 18:07:25.555526    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:07:27.682601    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:27.682601    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:27.683047    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 18:07:30.210413    2700 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 18:07:30.210413    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:30.210590    2700 host.go:66] Checking if "ha-339000" exists ...
	I0716 18:07:30.224368    2700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:07:30.224368    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:07:32.294185    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:32.294347    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:32.294447    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 18:07:34.922002    2700 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 18:07:34.922002    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:34.922338    2700 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 18:07:35.022103    2700 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7972957s)
	I0716 18:07:35.036245    2700 ssh_runner.go:195] Run: systemctl --version
	I0716 18:07:35.068448    2700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:07:35.099522    2700 kubeconfig.go:125] found "ha-339000" server: "https://172.27.175.254:8443"
	I0716 18:07:35.099630    2700 api_server.go:166] Checking apiserver status ...
	I0716 18:07:35.113043    2700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:07:35.155132    2700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	W0716 18:07:35.172570    2700 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0716 18:07:35.187416    2700 ssh_runner.go:195] Run: ls
	I0716 18:07:35.195000    2700 api_server.go:253] Checking apiserver healthz at https://172.27.175.254:8443/healthz ...
	I0716 18:07:35.202256    2700 api_server.go:279] https://172.27.175.254:8443/healthz returned 200:
	ok
	I0716 18:07:35.202256    2700 status.go:422] ha-339000 apiserver status = Running (err=<nil>)
	I0716 18:07:35.202735    2700 status.go:257] ha-339000 status: &{Name:ha-339000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 18:07:35.202781    2700 status.go:255] checking status of ha-339000-m02 ...
	I0716 18:07:35.203721    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:07:37.314601    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:37.314633    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:37.314741    2700 status.go:330] ha-339000-m02 host status = "Running" (err=<nil>)
	I0716 18:07:37.314830    2700 host.go:66] Checking if "ha-339000-m02" exists ...
	I0716 18:07:37.315541    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:07:39.439695    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:39.440453    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:39.440632    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:07:42.003916    2700 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 18:07:42.003916    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:42.003916    2700 host.go:66] Checking if "ha-339000-m02" exists ...
	I0716 18:07:42.016165    2700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:07:42.016165    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:07:44.119646    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:44.119646    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:44.119646    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:07:46.617385    2700 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 18:07:46.617385    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:46.617608    2700 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 18:07:46.721566    2700 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7052774s)
	I0716 18:07:46.734308    2700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:07:46.764471    2700 kubeconfig.go:125] found "ha-339000" server: "https://172.27.175.254:8443"
	I0716 18:07:46.764471    2700 api_server.go:166] Checking apiserver status ...
	I0716 18:07:46.777159    2700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0716 18:07:46.801244    2700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0716 18:07:46.801244    2700 status.go:422] ha-339000-m02 apiserver status = Stopped (err=<nil>)
	I0716 18:07:46.801244    2700 status.go:257] ha-339000-m02 status: &{Name:ha-339000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 18:07:46.801244    2700 status.go:255] checking status of ha-339000-m03 ...
	I0716 18:07:46.802122    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:07:48.955066    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:48.955244    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:48.955244    2700 status.go:330] ha-339000-m03 host status = "Running" (err=<nil>)
	I0716 18:07:48.955244    2700 host.go:66] Checking if "ha-339000-m03" exists ...
	I0716 18:07:48.956089    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:07:51.134904    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:51.135087    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:51.135087    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 18:07:53.724698    2700 main.go:141] libmachine: [stdout =====>] : 172.27.164.48
	
	I0716 18:07:53.724698    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:53.724950    2700 host.go:66] Checking if "ha-339000-m03" exists ...
	I0716 18:07:53.737313    2700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:07:53.737313    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:07:55.940250    2700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:07:55.940437    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:55.940493    2700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 18:07:58.570343    2700 main.go:141] libmachine: [stdout =====>] : 172.27.164.48
	
	I0716 18:07:58.570343    2700 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:07:58.571428    2700 sshutil.go:53] new ssh client: &{IP:172.27.164.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m03\id_rsa Username:docker}
	I0716 18:07:58.672445    2700 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9351143s)
	I0716 18:07:58.685018    2700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:07:58.710163    2700 status.go:257] ha-339000-m03 status: &{Name:ha-339000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.0268075s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.2857464s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-339000 -v=7                | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:07 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
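[editor's note] The sequence above creates a tiny fixed VHD, writes a "magic tar header" plus an SSH-key tar header into it, then converts and resizes it. The guest extracts that embedded tar on first boot to install the SSH key. A sketch of building such a seed archive, assuming the `.ssh/authorized_keys` entry name (the exact layout libmachine writes is not shown in the log):

```python
import io
import tarfile

def seed_key_tar(ssh_pub_key: bytes) -> bytes:
    """Build a small tar archive like the one written at the start of the raw
    disk image, so the guest can extract the SSH key on first boot."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(".ssh/authorized_keys")  # assumed entry name
        info.size = len(ssh_pub_key)
        tar.addfile(info, io.BytesIO(ssh_pub_key))
    return buf.getvalue()

blob = seed_key_tar(b"ssh-rsa AAAA... jenkins@minikube1\n")
print(len(blob) % 512 == 0)  # tar archives are 512-byte-block aligned -> True
```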
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
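[editor's note] The "Waiting for host to start..." section above polls `Get-VM ... .state` and `ipaddresses[0]` repeatedly, sleeping about a second between attempts, until the integration services report an address (172.27.165.29 here). A minimal Python sketch of that retry loop; `get_ip`, the attempt count, and the delay are illustrative (the real getter shells out to PowerShell):

```python
import time

def wait_for_ip(get_ip, attempts: int = 120, delay: float = 1.0) -> str:
    """Poll the VM's first IPv4 address until one appears, mirroring the
    repeated ipaddresses[0] queries in the log."""
    for _ in range(attempts):
        ip = get_ip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM never reported an IP address")

# Toy getter: empty twice (as in the log), then an address.
replies = iter(["", "", "172.27.165.29"])
print(wait_for_ip(lambda: next(replies), delay=0))  # -> 172.27.165.29
```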
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
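[editor's note] The shell snippet just above ensures `/etc/hosts` maps 127.0.1.1 to the new hostname: rewrite an existing 127.0.1.1 line if present, append one otherwise, and do nothing if the hostname is already there. The same logic as a Python sketch (a re-expression of the grep/sed pipeline, not minikube code):

```python
import re

def ensure_host_entry(hosts_text: str, hostname: str) -> str:
    """Mirror the logged shell snippet: skip if hostname present, rewrite an
    existing 127.0.1.1 line, or append a new one."""
    if re.search(r"\s" + re.escape(hostname) + r"$", hosts_text, re.MULTILINE):
        return hosts_text  # already mapped
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + hostname,
                      hosts_text, count=1, flags=re.MULTILINE)
    return hosts_text + "127.0.1.1 " + hostname + "\n"

print(ensure_host_entry("127.0.0.1 localhost\n", "ha-339000-m02"))
```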
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
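[editor's note] The server-cert line above lists its SANs as `san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]`: loopback, the machine IP, the machine name, and the fixed `localhost`/`minikube` names. A sketch of assembling that list (function name is illustrative; it just reproduces the set visible in the log):

```python
def server_cert_sans(machine_ip: str, machine_name: str) -> list[str]:
    """Assemble the SAN list for the machine server certificate, matching the
    san=[...] entry logged above (deduplicated, lexically sorted)."""
    return sorted({"127.0.0.1", machine_ip, machine_name, "localhost", "minikube"})

print(server_cert_sans("172.27.165.29", "ha-339000-m02"))
# -> ['127.0.0.1', '172.27.165.29', 'ha-339000-m02', 'localhost', 'minikube']
```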
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              21 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         21 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     22 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         22 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         22 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         22 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:08:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                21m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	Name:               ha-339000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T18_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.48
	  Hostname:    ha-339000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff4f98c52674609a5c1f5d575590d85
	  System UUID:                95806f43-d226-fc45-855f-7545f5ff8c84
	  Boot ID:                    189078cc-12dc-4313-b8cc-2bd120e015e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8tbsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-gt8g4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      88s
	  kube-system                 kube-proxy-q8dsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  NodeHasSufficientMemory  88s (x2 over 88s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x2 over 88s)  kubelet          Node ha-339000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x2 over 88s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                node-controller  Node ha-339000-m03 event: Registered Node ha-339000-m03 in Controller
	  Normal  NodeReady                57s                kubelet          Node ha-339000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 01:06] hrtimer: interrupt took 1854501 ns
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:00.177863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:46:00.178494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.164.29:2379"}
	2024/07/17 00:46:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:46:25.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.692505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:46:25.609927Z","caller":"traceutil/trace.go:171","msg":"trace[679487781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"186.853306ms","start":"2024-07-17T00:46:25.42306Z","end":"2024-07-17T00:46:25.609913Z","steps":["trace[679487781] 'range keys from in-memory index tree'  (duration: 186.648105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:39.20998Z","caller":"traceutil/trace.go:171","msg":"trace[678298741] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"177.148603ms","start":"2024-07-17T00:46:39.032813Z","end":"2024-07-17T00:46:39.209962Z","steps":["trace[678298741] 'process raft request'  (duration: 176.996702ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	{"level":"info","ts":"2024-07-17T01:04:13.014576Z","caller":"traceutil/trace.go:171","msg":"trace[872493798] transaction","detail":"{read_only:false; response_revision:2392; number_of_response:1; }","duration":"177.131462ms","start":"2024-07-17T01:04:12.837413Z","end":"2024-07-17T01:04:13.014545Z","steps":["trace[872493798] 'process raft request'  (duration: 176.960762ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:01.592724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-07-17T01:06:01.60253Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2051,"took":"8.916702ms","hash":355462830,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:06:01.602647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":355462830,"revision":2051,"compact-revision":1513}
	{"level":"info","ts":"2024-07-17T01:06:42.274723Z","caller":"traceutil/trace.go:171","msg":"trace[983672699] transaction","detail":"{read_only:false; response_revision:2660; number_of_response:1; }","duration":"112.448025ms","start":"2024-07-17T01:06:42.162253Z","end":"2024-07-17T01:06:42.274701Z","steps":["trace[983672699] 'process raft request'  (duration: 112.241325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:42.853896Z","caller":"traceutil/trace.go:171","msg":"trace[679544412] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"221.82955ms","start":"2024-07-17T01:06:42.632048Z","end":"2024-07-17T01:06:42.853877Z","steps":["trace[679544412] 'process raft request'  (duration: 221.09335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:07:01.40972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.351031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7064336636883510776 > lease_revoke:<id:620990be27382545>","response":"size:29"}
	{"level":"info","ts":"2024-07-17T01:07:01.409947Z","caller":"traceutil/trace.go:171","msg":"trace[1328045754] linearizableReadLoop","detail":"{readStateIndex:3001; appliedIndex:3000; }","duration":"269.211557ms","start":"2024-07-17T01:07:01.140722Z","end":"2024-07-17T01:07:01.409933Z","steps":["trace[1328045754] 'read index received'  (duration: 122.179226ms)","trace[1328045754] 'applied index is now lower than readState.Index'  (duration: 147.031131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:07:01.410655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.898858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-07-17T01:07:01.410717Z","caller":"traceutil/trace.go:171","msg":"trace[1287806677] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2729; }","duration":"270.008258ms","start":"2024-07-17T01:07:01.140698Z","end":"2024-07-17T01:07:01.410707Z","steps":["trace[1287806677] 'agreement among raft nodes before linearized reading'  (duration: 269.690957ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:07:05.608227Z","caller":"traceutil/trace.go:171","msg":"trace[977721237] transaction","detail":"{read_only:false; response_revision:2744; number_of_response:1; }","duration":"129.521427ms","start":"2024-07-17T01:07:05.478688Z","end":"2024-07-17T01:07:05.608209Z","steps":["trace[977721237] 'process raft request'  (duration: 129.341327ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:08:18 up 24 min,  0 users,  load average: 0.07, 0.32, 0.35
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:07:17.437703       1 main.go:303] handling current node
	I0717 01:07:27.428542       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:07:27.428599       1 main.go:303] handling current node
	I0717 01:07:27.428617       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:07:27.428624       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:07:37.436948       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:07:37.437071       1 main.go:303] handling current node
	I0717 01:07:37.437108       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:07:37.437117       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:07:47.433159       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:07:47.433288       1 main.go:303] handling current node
	I0717 01:07:47.433307       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:07:47.433316       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:07:57.429149       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:07:57.429595       1 main.go:303] handling current node
	I0717 01:07:57.429908       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:07:57.430319       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:07.436636       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:07.436758       1 main.go:303] handling current node
	I0717 01:08:07.436780       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:07.436788       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:17.436847       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:17.436900       1 main.go:303] handling current node
	I0717 01:08:17.436936       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:17.436943       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	I0717 01:06:50.412939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-339000-m03\" does not exist"
	I0717 01:06:50.457469       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-339000-m03" podCIDRs=["10.244.1.0/24"]
	I0717 01:06:54.850142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-339000-m03"
	I0717 01:07:21.350361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-339000-m03"
	I0717 01:07:21.400227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.3µs"
	I0717 01:07:21.401000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.8µs"
	I0717 01:07:21.425714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0717 01:07:24.751410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.345403ms"
	I0717 01:07:24.752323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:04:07 ha-339000 kubelet[2368]: E0717 01:04:07.788225    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:04:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:04:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:04:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:04:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:05:07 ha-339000 kubelet[2368]: E0717 01:05:07.787517    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:05:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:05:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:05:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:05:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:06:07 ha-339000 kubelet[2368]: E0717 01:06:07.791360    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:06:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:07:07 ha-339000 kubelet[2368]: E0717 01:07:07.802131    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:08:07 ha-339000 kubelet[2368]: E0717 01:08:07.786256    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0716 18:08:10.875240    4908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.0845169s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh:

-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m54s (x5 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  60s (x2 over 71s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (279.34s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (52.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (18.8831993s)
ha_test.go:304: expected profile "ha-339000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-339000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-339000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-339000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.175.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.164.29\",\"Port\":8443,\"KubernetesVersion
\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.165.29\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.164.48\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\"
:false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
ha_test.go:307: expected profile "ha-339000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-339000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-339000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-339000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.175.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.164.29\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.165.29\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.164.48\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":
false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"Dis
ableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
E0716 18:09:00.803874    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.0531366s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.4765483s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-339000 -v=7                | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:07 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
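The two `Get-VMSwitch` probes above ask Hyper-V for every switch that is either `External` or the well-known Default Switch GUID, sorted by `SwitchType`, and the driver then settles on "Default Switch". A minimal Python sketch of that selection logic is below; `pick_switch` is a hypothetical helper (not minikube code), and the rule "prefer an External switch when one exists" is an assumption about the intent of the sort.

```python
import json

# The JSON exactly as printed by the Get-VMSwitch probe above.
switches_json = """
[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]
"""

EXTERNAL = 2          # Hyper-V SwitchType: 0=Private, 1=Internal, 2=External
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(switches):
    """Keep External switches plus the Default Switch, then prefer External."""
    candidates = [s for s in switches
                  if s["SwitchType"] == EXTERNAL or s["Id"] == DEFAULT_SWITCH_ID]
    # External (2) sorts after Internal (1), so take the last element.
    candidates.sort(key=lambda s: s["SwitchType"])
    return candidates[-1]["Name"] if candidates else None

print(pick_switch(json.loads(switches_json)))  # Default Switch
```

With only the Default Switch present, as on this host, the fallback is the only candidate, which matches the `Using switch "Default Switch"` line further down.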
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
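The VHD sequence above looks roundabout on purpose: a tiny 10 MB *fixed* VHD is created first, the SSH key is written into it as a raw tar stream (the "Writing magic tar header" / "Writing SSH key tar header" lines), and only then is the file converted to a dynamic VHD and resized to the full 20000 MB. A fixed VHD is the raw disk image plus a footer, so bytes written at offset 0 land at the very start of the virtual disk, where the boot2docker guest expects to find a tar archive to unpack before formatting. The following Python illustration of the embedding step is a sketch under those assumptions; the file name and key are placeholders.

```python
import io
import os
import tarfile
import tempfile

def embed_ssh_key(disk_path, pubkey: bytes, size=10 * 1024 * 1024):
    """Write a tar archive holding the SSH public key at offset 0 of a raw
    disk image, then pad the file out to the fixed-VHD data size."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    with open(disk_path, "wb") as disk:
        disk.write(buf.getvalue())
        disk.truncate(size)          # sparse-pad to the nominal disk size

DISK = os.path.join(tempfile.mkdtemp(), "fixed.img")
embed_ssh_key(DISK, b"ssh-rsa AAAA... jenkins@minikube1\n")
```

`Convert-VHD ... -VHDType Dynamic -DeleteSource` then repackages that raw payload, and `Resize-VHD` grows it, which is why the key survives into the final `disk.vhd`.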
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
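Everything between "Waiting for host to start..." and the first `172.27.164.29` line is one retry loop: query the VM state, query `ipaddresses[0]` of the first network adapter, and sleep while the address is still empty, since Hyper-V only reports a guest IP once the integration services come up. The pattern, sketched in Python (the probe function, timeout, and interval are illustrative stand-ins, not minikube's actual values):

```python
import time

def wait_for_ip(probe, timeout=120.0, interval=3.0):
    """Poll probe() until it returns a non-empty IP or the deadline passes.

    probe stands in for the PowerShell call
    (( Hyper-V\\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = probe()
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("host never reported an IP address")

# Simulated probe: empty for the first polls, then the address appears.
replies = iter(["", "", "", "172.27.164.29"])
print(wait_for_ip(lambda: next(replies), interval=0.01))  # 172.27.164.29
```

In the log above the probe comes back empty five times (roughly 27 seconds of wall clock) before the adapter reports an address.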
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
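The shell snippet just run over SSH makes the `127.0.1.1` entry idempotent: do nothing if some line already maps the hostname, rewrite an existing `127.0.1.1` line if there is one, and append otherwise. The same decision tree in Python, operating on the file contents as a string; this is a sketch for clarity, not how minikube does it (minikube runs the shell version over SSH):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the grep/sed/tee logic: idempotently map 127.0.1.1 to name."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, flags=re.M):
        return hosts                              # already mapped: no change
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, flags=re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)          # rewrite the existing entry
    return hosts + f"127.0.1.1 {name}\n"          # append a new entry

hosts = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
print(ensure_hostname(hosts, "ha-339000"), end="")
```

The empty `SSH cmd err, output` line above is the success case: the guest still had its ISO-default `127.0.1.1 minikube` entry, so the `sed` branch fired and `tee` never printed anything.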
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
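The burst of identical `kubectl get sa default` runs above is a retry loop: minikube polls for the `default` service account roughly every 500 ms until it appears, then reports the total duration. A minimal sketch of that pattern in shell (assumption: `check` below is a hypothetical stand-in for the real SSH-wrapped kubectl probe, which is not reproduced here):

```shell
# Retry a command until it succeeds or a deadline passes, sleeping ~500 ms
# between attempts -- the same shape as the elevateKubeSystemPrivileges wait.
wait_until() {                 # usage: wait_until <timeout_seconds> <cmd...>
  local deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1   # timed out
    sleep 0.5                                       # poll interval
  done
}
```

In the log the probe is `sudo kubectl get sa default --kubeconfig=...` run over SSH; the loop exits as soon as the API server has created the namespace's default service account.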
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
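The long command above edits the CoreDNS ConfigMap in place: it uses sed to insert a `hosts` block (resolving `host.minikube.internal` to the gateway IP) before the `forward` plugin and a `log` directive before `errors`, then feeds the result back through `kubectl replace`. A local re-creation of just the sed step, applied to a sample Corefile instead of the live ConfigMap (the sample content and output file are illustrative, not from the log):

```shell
# Sample Corefile with the stanzas the sed expressions match on
# (8-space indentation, as in the real CoreDNS ConfigMap).
cat > /tmp/Corefile <<'EOF'
.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}
EOF
# Same sed program as the log: add a hosts{} block before `forward`
# and a `log` directive before `errors`.
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' \
    /tmp/Corefile > /tmp/Corefile.new
cat /tmp/Corefile.new
```

The `fallthrough` line matters: without it, the `hosts` plugin would answer NXDOMAIN for every name it does not know instead of passing the query on to `forward`.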
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              22 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         22 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     23 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         23 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         23 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         23 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:09:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                22m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	Name:               ha-339000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T18_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:09:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.48
	  Hostname:    ha-339000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff4f98c52674609a5c1f5d575590d85
	  System UUID:                95806f43-d226-fc45-855f-7545f5ff8c84
	  Boot ID:                    189078cc-12dc-4313-b8cc-2bd120e015e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8tbsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-gt8g4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m21s
	  kube-system                 kube-proxy-q8dsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s (x2 over 2m21s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x2 over 2m21s)  kubelet          Node ha-339000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x2 over 2m21s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-339000-m03 event: Registered Node ha-339000-m03 in Controller
	  Normal  NodeReady                110s                   kubelet          Node ha-339000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 01:06] hrtimer: interrupt took 1854501 ns
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:00.177863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:46:00.178494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.164.29:2379"}
	2024/07/17 00:46:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:46:25.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.692505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:46:25.609927Z","caller":"traceutil/trace.go:171","msg":"trace[679487781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"186.853306ms","start":"2024-07-17T00:46:25.42306Z","end":"2024-07-17T00:46:25.609913Z","steps":["trace[679487781] 'range keys from in-memory index tree'  (duration: 186.648105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:39.20998Z","caller":"traceutil/trace.go:171","msg":"trace[678298741] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"177.148603ms","start":"2024-07-17T00:46:39.032813Z","end":"2024-07-17T00:46:39.209962Z","steps":["trace[678298741] 'process raft request'  (duration: 176.996702ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	{"level":"info","ts":"2024-07-17T01:04:13.014576Z","caller":"traceutil/trace.go:171","msg":"trace[872493798] transaction","detail":"{read_only:false; response_revision:2392; number_of_response:1; }","duration":"177.131462ms","start":"2024-07-17T01:04:12.837413Z","end":"2024-07-17T01:04:13.014545Z","steps":["trace[872493798] 'process raft request'  (duration: 176.960762ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:01.592724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-07-17T01:06:01.60253Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2051,"took":"8.916702ms","hash":355462830,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:06:01.602647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":355462830,"revision":2051,"compact-revision":1513}
	{"level":"info","ts":"2024-07-17T01:06:42.274723Z","caller":"traceutil/trace.go:171","msg":"trace[983672699] transaction","detail":"{read_only:false; response_revision:2660; number_of_response:1; }","duration":"112.448025ms","start":"2024-07-17T01:06:42.162253Z","end":"2024-07-17T01:06:42.274701Z","steps":["trace[983672699] 'process raft request'  (duration: 112.241325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:42.853896Z","caller":"traceutil/trace.go:171","msg":"trace[679544412] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"221.82955ms","start":"2024-07-17T01:06:42.632048Z","end":"2024-07-17T01:06:42.853877Z","steps":["trace[679544412] 'process raft request'  (duration: 221.09335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:07:01.40972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.351031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7064336636883510776 > lease_revoke:<id:620990be27382545>","response":"size:29"}
	{"level":"info","ts":"2024-07-17T01:07:01.409947Z","caller":"traceutil/trace.go:171","msg":"trace[1328045754] linearizableReadLoop","detail":"{readStateIndex:3001; appliedIndex:3000; }","duration":"269.211557ms","start":"2024-07-17T01:07:01.140722Z","end":"2024-07-17T01:07:01.409933Z","steps":["trace[1328045754] 'read index received'  (duration: 122.179226ms)","trace[1328045754] 'applied index is now lower than readState.Index'  (duration: 147.031131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:07:01.410655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.898858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-07-17T01:07:01.410717Z","caller":"traceutil/trace.go:171","msg":"trace[1287806677] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2729; }","duration":"270.008258ms","start":"2024-07-17T01:07:01.140698Z","end":"2024-07-17T01:07:01.410707Z","steps":["trace[1287806677] 'agreement among raft nodes before linearized reading'  (duration: 269.690957ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:07:05.608227Z","caller":"traceutil/trace.go:171","msg":"trace[977721237] transaction","detail":"{read_only:false; response_revision:2744; number_of_response:1; }","duration":"129.521427ms","start":"2024-07-17T01:07:05.478688Z","end":"2024-07-17T01:07:05.608209Z","steps":["trace[977721237] 'process raft request'  (duration: 129.341327ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:09:11 up 25 min,  0 users,  load average: 0.24, 0.32, 0.34
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:08:07.436788       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:17.436847       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:17.436900       1 main.go:303] handling current node
	I0717 01:08:17.436936       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:17.436943       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:27.427946       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:27.428622       1 main.go:303] handling current node
	I0717 01:08:27.428731       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:27.428744       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:37.434504       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:37.434618       1 main.go:303] handling current node
	I0717 01:08:37.434639       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:37.434647       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:47.433361       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:47.433613       1 main.go:303] handling current node
	I0717 01:08:47.433633       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:47.433642       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:08:57.427811       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:08:57.427953       1 main.go:303] handling current node
	I0717 01:08:57.428023       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:08:57.428037       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:09:07.437023       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:09:07.437564       1 main.go:303] handling current node
	I0717 01:09:07.437768       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:09:07.437782       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	I0717 01:06:50.412939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-339000-m03\" does not exist"
	I0717 01:06:50.457469       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-339000-m03" podCIDRs=["10.244.1.0/24"]
	I0717 01:06:54.850142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-339000-m03"
	I0717 01:07:21.350361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-339000-m03"
	I0717 01:07:21.400227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.3µs"
	I0717 01:07:21.401000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.8µs"
	I0717 01:07:21.425714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0717 01:07:24.751410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.345403ms"
	I0717 01:07:24.752323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:05:07 ha-339000 kubelet[2368]: E0717 01:05:07.787517    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:05:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:05:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:05:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:05:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:06:07 ha-339000 kubelet[2368]: E0717 01:06:07.791360    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:06:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:07:07 ha-339000 kubelet[2368]: E0717 01:07:07.802131    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:08:07 ha-339000 kubelet[2368]: E0717 01:08:07.786256    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:09:07 ha-339000 kubelet[2368]: E0717 01:09:07.787105    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:09:03.485733   15212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.1390696s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m47s (x5 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  113s (x2 over 2m4s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (52.64s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (70.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status --output json -v=7 --alsologtostderr: exit status 2 (35.8028165s)

                                                
                                                
-- stdout --
	[{"Name":"ha-339000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-339000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-339000-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:09:25.174945    1936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 18:09:25.182506    1936 out.go:291] Setting OutFile to fd 844 ...
	I0716 18:09:25.182506    1936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:09:25.182506    1936 out.go:304] Setting ErrFile to fd 952...
	I0716 18:09:25.183493    1936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:09:25.196743    1936 out.go:298] Setting JSON to true
	I0716 18:09:25.197749    1936 mustload.go:65] Loading cluster: ha-339000
	I0716 18:09:25.197749    1936 notify.go:220] Checking for updates...
	I0716 18:09:25.198075    1936 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:09:25.198075    1936 status.go:255] checking status of ha-339000 ...
	I0716 18:09:25.199256    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:09:27.342372    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:27.342372    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:27.342969    1936 status.go:330] ha-339000 host status = "Running" (err=<nil>)
	I0716 18:09:27.342969    1936 host.go:66] Checking if "ha-339000" exists ...
	I0716 18:09:27.344125    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:09:29.491887    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:29.491887    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:29.491887    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 18:09:32.077292    1936 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 18:09:32.077981    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:32.077981    1936 host.go:66] Checking if "ha-339000" exists ...
	I0716 18:09:32.092278    1936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:09:32.092278    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:09:34.219333    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:34.219333    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:34.219951    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 18:09:36.767811    1936 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 18:09:36.767811    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:36.768159    1936 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 18:09:36.873417    1936 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7811216s)
	I0716 18:09:36.886178    1936 ssh_runner.go:195] Run: systemctl --version
	I0716 18:09:36.912021    1936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:09:36.943446    1936 kubeconfig.go:125] found "ha-339000" server: "https://172.27.175.254:8443"
	I0716 18:09:36.943446    1936 api_server.go:166] Checking apiserver status ...
	I0716 18:09:36.956404    1936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:09:36.996439    1936 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	W0716 18:09:37.014673    1936 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0716 18:09:37.028444    1936 ssh_runner.go:195] Run: ls
	I0716 18:09:37.035444    1936 api_server.go:253] Checking apiserver healthz at https://172.27.175.254:8443/healthz ...
	I0716 18:09:37.042592    1936 api_server.go:279] https://172.27.175.254:8443/healthz returned 200:
	ok
	I0716 18:09:37.042592    1936 status.go:422] ha-339000 apiserver status = Running (err=<nil>)
	I0716 18:09:37.042592    1936 status.go:257] ha-339000 status: &{Name:ha-339000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 18:09:37.042592    1936 status.go:255] checking status of ha-339000-m02 ...
	I0716 18:09:37.042592    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:09:39.182042    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:39.182042    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:39.182042    1936 status.go:330] ha-339000-m02 host status = "Running" (err=<nil>)
	I0716 18:09:39.182042    1936 host.go:66] Checking if "ha-339000-m02" exists ...
	I0716 18:09:39.183256    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:09:41.371390    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:41.371962    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:41.371962    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:09:43.919710    1936 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 18:09:43.919892    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:43.919977    1936 host.go:66] Checking if "ha-339000-m02" exists ...
	I0716 18:09:43.934302    1936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:09:43.934302    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:09:46.047228    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:46.047228    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:46.047349    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:09:48.587593    1936 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 18:09:48.587805    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:48.588027    1936 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 18:09:48.687196    1936 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.752667s)
	I0716 18:09:48.700929    1936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:09:48.729670    1936 kubeconfig.go:125] found "ha-339000" server: "https://172.27.175.254:8443"
	I0716 18:09:48.729670    1936 api_server.go:166] Checking apiserver status ...
	I0716 18:09:48.743771    1936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0716 18:09:48.766286    1936 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0716 18:09:48.766286    1936 status.go:422] ha-339000-m02 apiserver status = Stopped (err=<nil>)
	I0716 18:09:48.766286    1936 status.go:257] ha-339000-m02 status: &{Name:ha-339000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 18:09:48.766286    1936 status.go:255] checking status of ha-339000-m03 ...
	I0716 18:09:48.767202    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:09:50.945513    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:50.946524    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:50.946581    1936 status.go:330] ha-339000-m03 host status = "Running" (err=<nil>)
	I0716 18:09:50.946630    1936 host.go:66] Checking if "ha-339000-m03" exists ...
	I0716 18:09:50.947710    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:09:53.175800    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:53.176388    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:53.176533    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 18:09:55.808693    1936 main.go:141] libmachine: [stdout =====>] : 172.27.164.48
	
	I0716 18:09:55.809654    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:55.809740    1936 host.go:66] Checking if "ha-339000-m03" exists ...
	I0716 18:09:55.824299    1936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:09:55.824299    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:09:58.034610    1936 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:09:58.034752    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:09:58.034826    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 18:10:00.686095    1936 main.go:141] libmachine: [stdout =====>] : 172.27.164.48
	
	I0716 18:10:00.686095    1936 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:10:00.686685    1936 sshutil.go:53] new ssh client: &{IP:172.27.164.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m03\id_rsa Username:docker}
	I0716 18:10:00.795101    1936 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9707841s)
	I0716 18:10:00.810189    1936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:10:00.834913    1936 status.go:257] ha-339000-m03 status: &{Name:ha-339000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-339000 status --output json -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.458844s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.76391s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-339000 -v=7                | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:07 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
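(The `/etc/hosts` guard that minikube runs over SSH above can be reproduced as a simplified, standalone sketch. The hostname `ha-339000` is taken from the log; the file content and paths here are illustrative scratch data so the sketch runs without root.)

```shell
# Sketch of the hosts-file update from the log lines above, run against a
# scratch copy instead of the real /etc/hosts (no sudo needed).
NAME=ha-339000
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

# Only touch the file if the hostname is not already mapped.
if ! grep -q "\s$NAME" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
        # An existing 127.0.1.1 mapping is rewritten in place...
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # ...otherwise a fresh mapping is appended.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
rm -f "$HOSTS"
echo "$RESULT"
```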
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
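(The `diff -u … || { mv …; systemctl … }` one-liner above is an install-if-changed idiom: the new unit file replaces the old one, and docker is reloaded, only when the content actually differs. A minimal sketch of the same pattern on scratch files, with no systemd involved:)

```shell
# Install-if-changed sketch: the real log moves docker.service.new over
# docker.service and restarts docker only when diff reports a difference.
NEW=$(mktemp)
CUR=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$NEW"
echo "old contents" > "$CUR"

if diff -u "$CUR" "$NEW" > /dev/null 2>&1; then
    CHANGED=no           # identical: leave the installed file alone
else
    mv "$NEW" "$CUR"     # differs: replace (the real flow also daemon-reloads)
    CHANGED=yes
fi

CONTENT=$(cat "$CUR")
rm -f "$CUR"
echo "$CHANGED: $CONTENT"
```

Note that in the log the diff "fails" because `/lib/systemd/system/docker.service` does not exist yet; `diff`'s non-zero exit still triggers the install branch, which is why the first provision also works.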
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
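(The clock fix above reads the guest's epoch time, compares it with the host's, and resyncs via `sudo date -s @<epoch>` when they drift. A sketch of that delta check, using the epoch value from the log; the remote timestamp and the 2-second threshold are illustrative, not minikube's actual cutoff:)

```shell
# Guest-clock drift check, sketched from the log: |guest - remote| over a
# threshold triggers a resync command run over SSH.
GUEST=1721177115   # epoch the guest reported (from the log)
REMOTE=1721177111  # host-side epoch (illustrative)

DELTA=$((GUEST - REMOTE))
[ "$DELTA" -lt 0 ] && DELTA=$((-DELTA))   # absolute value

if [ "$DELTA" -gt 2 ]; then
    # Matches the command shape visible in the log.
    SYNC_CMD="sudo date -s @$REMOTE"
fi
echo "delta=${DELTA}s cmd=${SYNC_CMD:-none}"
```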
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
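(The status-127 warning above is the shell's "command not found" exit code: the Windows binary name `curl.exe` was invoked inside the Linux guest, where only `curl` could exist. A hedged sketch of a portable binary probe; the fallback logic is illustrative, not minikube's:)

```shell
# Probe for whichever curl binary this system actually has; exit 127 in the
# log means the literal name `curl.exe` resolved to nothing in the guest.
if command -v curl.exe > /dev/null 2>&1; then
    CURL=curl.exe
elif command -v curl > /dev/null 2>&1; then
    CURL=curl
else
    CURL=""
fi
echo "using: ${CURL:-none}"
```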
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
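	The `--discovery-token-ca-cert-hash` value printed with the join commands is not arbitrary: per the kubeadm documentation it is the SHA-256 of the cluster CA's DER-encoded public key. A sketch of how a joining node could recompute it, with a throwaway self-signed cert standing in for `/etc/kubernetes/pki/ca.crt`:

```shell
# Recompute a kubeadm discovery-token-ca-cert-hash from a CA certificate.
# The self-signed cert here is a stand-in for /etc/kubernetes/pki/ca.crt.
set -eu
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# SHA-256 over the DER-encoded public key, as documented for kubeadm join.
openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | sed 's/^.* //'
```

Running this against the real cluster CA would reproduce the `sha256:803f55...` digest shown above, which is how joining nodes pin the control plane's identity.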
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
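	The long pipeline at 17:46:20.05 rewrites the CoreDNS ConfigMap in place: it inserts a `hosts` block mapping `host.minikube.internal` to the host gateway IP ahead of the `forward . /etc/resolv.conf` line, and adds `log` under `errors`, then feeds the result back through `kubectl replace`. The sed insertion can be replayed against a sample Corefile (GNU sed, as inside the minikube VM; the sample Corefile below is illustrative):

```shell
# Replay of the Corefile edit from the log against a sample Corefile.
# GNU sed's `i \` with embedded \n inserts a multi-line block before a match.
set -eu
cf=$(mktemp)
cat > "$cf" <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}
EOF
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' "$cf"
```

The `fallthrough` directive matters: without it, names not listed in the `hosts` block would get NXDOMAIN instead of falling through to `forward`.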
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         23 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              23 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         24 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     24 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         24 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         24 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         24 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         24 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:10:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:05:59 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                23m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	Name:               ha-339000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T18_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.48
	  Hostname:    ha-339000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff4f98c52674609a5c1f5d575590d85
	  System UUID:                95806f43-d226-fc45-855f-7545f5ff8c84
	  Boot ID:                    189078cc-12dc-4313-b8cc-2bd120e015e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8tbsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kindnet-gt8g4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m31s
	  kube-system                 kube-proxy-q8dsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m31s (x2 over 3m31s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m31s (x2 over 3m31s)  kubelet          Node ha-339000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m31s (x2 over 3m31s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node ha-339000-m03 event: Registered Node ha-339000-m03 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-339000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 01:06] hrtimer: interrupt took 1854501 ns
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:00.177863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:46:00.178494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.164.29:2379"}
	2024/07/17 00:46:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:46:25.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.692505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:46:25.609927Z","caller":"traceutil/trace.go:171","msg":"trace[679487781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"186.853306ms","start":"2024-07-17T00:46:25.42306Z","end":"2024-07-17T00:46:25.609913Z","steps":["trace[679487781] 'range keys from in-memory index tree'  (duration: 186.648105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:39.20998Z","caller":"traceutil/trace.go:171","msg":"trace[678298741] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"177.148603ms","start":"2024-07-17T00:46:39.032813Z","end":"2024-07-17T00:46:39.209962Z","steps":["trace[678298741] 'process raft request'  (duration: 176.996702ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	{"level":"info","ts":"2024-07-17T01:04:13.014576Z","caller":"traceutil/trace.go:171","msg":"trace[872493798] transaction","detail":"{read_only:false; response_revision:2392; number_of_response:1; }","duration":"177.131462ms","start":"2024-07-17T01:04:12.837413Z","end":"2024-07-17T01:04:13.014545Z","steps":["trace[872493798] 'process raft request'  (duration: 176.960762ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:01.592724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-07-17T01:06:01.60253Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2051,"took":"8.916702ms","hash":355462830,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:06:01.602647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":355462830,"revision":2051,"compact-revision":1513}
	{"level":"info","ts":"2024-07-17T01:06:42.274723Z","caller":"traceutil/trace.go:171","msg":"trace[983672699] transaction","detail":"{read_only:false; response_revision:2660; number_of_response:1; }","duration":"112.448025ms","start":"2024-07-17T01:06:42.162253Z","end":"2024-07-17T01:06:42.274701Z","steps":["trace[983672699] 'process raft request'  (duration: 112.241325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:42.853896Z","caller":"traceutil/trace.go:171","msg":"trace[679544412] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"221.82955ms","start":"2024-07-17T01:06:42.632048Z","end":"2024-07-17T01:06:42.853877Z","steps":["trace[679544412] 'process raft request'  (duration: 221.09335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:07:01.40972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.351031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7064336636883510776 > lease_revoke:<id:620990be27382545>","response":"size:29"}
	{"level":"info","ts":"2024-07-17T01:07:01.409947Z","caller":"traceutil/trace.go:171","msg":"trace[1328045754] linearizableReadLoop","detail":"{readStateIndex:3001; appliedIndex:3000; }","duration":"269.211557ms","start":"2024-07-17T01:07:01.140722Z","end":"2024-07-17T01:07:01.409933Z","steps":["trace[1328045754] 'read index received'  (duration: 122.179226ms)","trace[1328045754] 'applied index is now lower than readState.Index'  (duration: 147.031131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:07:01.410655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.898858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-07-17T01:07:01.410717Z","caller":"traceutil/trace.go:171","msg":"trace[1287806677] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2729; }","duration":"270.008258ms","start":"2024-07-17T01:07:01.140698Z","end":"2024-07-17T01:07:01.410707Z","steps":["trace[1287806677] 'agreement among raft nodes before linearized reading'  (duration: 269.690957ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:07:05.608227Z","caller":"traceutil/trace.go:171","msg":"trace[977721237] transaction","detail":"{read_only:false; response_revision:2744; number_of_response:1; }","duration":"129.521427ms","start":"2024-07-17T01:07:05.478688Z","end":"2024-07-17T01:07:05.608209Z","steps":["trace[977721237] 'process raft request'  (duration: 129.341327ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:10:21 up 26 min,  0 users,  load average: 0.22, 0.28, 0.33
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:09:17.434828       1 main.go:303] handling current node
	I0717 01:09:27.427810       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:09:27.427920       1 main.go:303] handling current node
	I0717 01:09:27.428056       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:09:27.428390       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:09:37.436775       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:09:37.436866       1 main.go:303] handling current node
	I0717 01:09:37.437094       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:09:37.437111       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:09:47.431938       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:09:47.432016       1 main.go:303] handling current node
	I0717 01:09:47.432041       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:09:47.432054       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:09:57.428731       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:09:57.428848       1 main.go:303] handling current node
	I0717 01:09:57.428869       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:09:57.428877       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:10:07.434683       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:10:07.434975       1 main.go:303] handling current node
	I0717 01:10:07.435090       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:10:07.435150       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:10:17.436776       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:10:17.436895       1 main.go:303] handling current node
	I0717 01:10:17.436917       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:10:17.436931       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	I0717 01:06:50.412939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-339000-m03\" does not exist"
	I0717 01:06:50.457469       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-339000-m03" podCIDRs=["10.244.1.0/24"]
	I0717 01:06:54.850142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-339000-m03"
	I0717 01:07:21.350361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-339000-m03"
	I0717 01:07:21.400227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.3µs"
	I0717 01:07:21.401000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.8µs"
	I0717 01:07:21.425714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0717 01:07:24.751410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.345403ms"
	I0717 01:07:24.752323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:06:07 ha-339000 kubelet[2368]: E0717 01:06:07.791360    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:06:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:06:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:07:07 ha-339000 kubelet[2368]: E0717 01:07:07.802131    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:08:07 ha-339000 kubelet[2368]: E0717 01:08:07.786256    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:09:07 ha-339000 kubelet[2368]: E0717 01:09:07.787105    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:10:07 ha-339000 kubelet[2368]: E0717 01:10:07.789022    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:10:13.452818    8180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.2465925s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh:

-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  4m57s (x5 over 20m)   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  3m3s (x2 over 3m14s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (70.38s)

TestMultiControlPlane/serial/StopSecondaryNode (95.88s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 node stop m02 -v=7 --alsologtostderr
E0716 18:11:05.786025    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 node stop m02 -v=7 --alsologtostderr: (36.3223361s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: exit status 7 (26.0382677s)

-- stdout --
	ha-339000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-339000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-339000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0716 18:11:11.878930    3812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 18:11:11.885823    3812 out.go:291] Setting OutFile to fd 648 ...
	I0716 18:11:11.886870    3812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:11:11.886870    3812 out.go:304] Setting ErrFile to fd 252...
	I0716 18:11:11.886870    3812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:11:11.903985    3812 out.go:298] Setting JSON to false
	I0716 18:11:11.903985    3812 mustload.go:65] Loading cluster: ha-339000
	I0716 18:11:11.903985    3812 notify.go:220] Checking for updates...
	I0716 18:11:11.904876    3812 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:11:11.904876    3812 status.go:255] checking status of ha-339000 ...
	I0716 18:11:11.906347    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:11:14.086110    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:11:14.086110    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:14.086110    3812 status.go:330] ha-339000 host status = "Running" (err=<nil>)
	I0716 18:11:14.086409    3812 host.go:66] Checking if "ha-339000" exists ...
	I0716 18:11:14.087150    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:11:16.238140    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:11:16.238140    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:16.238586    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 18:11:18.904480    3812 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 18:11:18.904480    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:18.904617    3812 host.go:66] Checking if "ha-339000" exists ...
	I0716 18:11:18.917007    3812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:11:18.917007    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 18:11:21.059501    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:11:21.059501    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:21.059845    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 18:11:23.610651    3812 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 18:11:23.610651    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:23.611202    3812 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 18:11:23.719791    3812 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8027665s)
	I0716 18:11:23.733688    3812 ssh_runner.go:195] Run: systemctl --version
	I0716 18:11:23.758364    3812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:11:23.785636    3812 kubeconfig.go:125] found "ha-339000" server: "https://172.27.175.254:8443"
	I0716 18:11:23.785636    3812 api_server.go:166] Checking apiserver status ...
	I0716 18:11:23.798632    3812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:11:23.839281    3812 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	W0716 18:11:23.858952    3812 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0716 18:11:23.871880    3812 ssh_runner.go:195] Run: ls
	I0716 18:11:23.878862    3812 api_server.go:253] Checking apiserver healthz at https://172.27.175.254:8443/healthz ...
	I0716 18:11:23.888550    3812 api_server.go:279] https://172.27.175.254:8443/healthz returned 200:
	ok
	I0716 18:11:23.888550    3812 status.go:422] ha-339000 apiserver status = Running (err=<nil>)
	I0716 18:11:23.888550    3812 status.go:257] ha-339000 status: &{Name:ha-339000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 18:11:23.888667    3812 status.go:255] checking status of ha-339000-m02 ...
	I0716 18:11:23.894299    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:11:25.980500    3812 main.go:141] libmachine: [stdout =====>] : Off
	
	I0716 18:11:25.980500    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:25.980806    3812 status.go:330] ha-339000-m02 host status = "Stopped" (err=<nil>)
	I0716 18:11:25.980862    3812 status.go:343] host is not running, skipping remaining checks
	I0716 18:11:25.980862    3812 status.go:257] ha-339000-m02 status: &{Name:ha-339000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 18:11:25.980862    3812 status.go:255] checking status of ha-339000-m03 ...
	I0716 18:11:25.981731    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:11:28.148470    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:11:28.148801    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:28.148801    3812 status.go:330] ha-339000-m03 host status = "Running" (err=<nil>)
	I0716 18:11:28.148801    3812 host.go:66] Checking if "ha-339000-m03" exists ...
	I0716 18:11:28.149787    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:11:30.345214    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:11:30.345214    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:30.346123    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 18:11:32.892251    3812 main.go:141] libmachine: [stdout =====>] : 172.27.164.48
	
	I0716 18:11:32.892251    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:32.892419    3812 host.go:66] Checking if "ha-339000-m03" exists ...
	I0716 18:11:32.905807    3812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 18:11:32.905807    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m03 ).state
	I0716 18:11:35.029801    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:11:35.029801    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:35.029801    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 18:11:37.623119    3812 main.go:141] libmachine: [stdout =====>] : 172.27.164.48
	
	I0716 18:11:37.623119    3812 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:11:37.623718    3812 sshutil.go:53] new ssh client: &{IP:172.27.164.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m03\id_rsa Username:docker}
	I0716 18:11:37.723542    3812 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8177168s)
	I0716 18:11:37.735898    3812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:11:37.775249    3812 status.go:257] ha-339000-m03 status: &{Name:ha-339000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr": ha-339000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-339000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-339000-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:378: status says not three hosts are running: args "out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr": ha-339000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-339000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-339000-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:381: status says not three kubelets are running: args "out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr": ha-339000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-339000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-339000-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:384: status says not two apiservers are running: args "out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr": ha-339000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-339000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-339000-m03
type: Worker
host: Running
kubelet: Running

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.0902213s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.2664666s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-339000 -v=7                | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:07 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-339000 node stop m02 -v=7         | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:10 PDT | 16 Jul 24 18:11 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
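	The one-liner above makes the hosts entry idempotent: strip any existing line for the name, then append the current VIP. A standalone sketch of the same pattern against a scratch file (the stale 172.27.0.9 entry is invented for the demo; no sudo or real /etc/hosts involved):

```shell
# Demo of minikube's idempotent /etc/hosts update, run on a temp copy.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.27.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=172.27.175.254
# Drop any prior entry for the name, then append the current one.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"
```

Running it twice leaves exactly one entry for the name, which is why minikube can re-run it safely on every start.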
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
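	The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above are how OpenSSL's verify path locates CAs: by an 8-hex-digit subject-hash symlink. A self-contained sketch (a throwaway CA stands in for minikubeCA.pem, and a temp dir stands in for /etc/ssl/certs):

```shell
# Recreate the subject-hash symlink scheme in a scratch directory.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/demoCA.pem" -days 1 2>/dev/null
h=$(openssl x509 -hash -noout -in "$tmp/demoCA.pem")   # 8 hex chars
ln -fs "$tmp/demoCA.pem" "$tmp/$h.0"                   # on the node: /etc/ssl/certs/<hash>.0
# With the symlink in place, -CApath lookup can find the CA.
openssl verify -CApath "$tmp" "$tmp/demoCA.pem"
```

The `.0` suffix is a collision counter; a second CA with the same subject hash would get `.1`.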
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         25 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         25 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         25 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              25 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         25 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     25 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         25 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         25 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         25 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:11:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25m                kube-proxy       
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                25m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	Name:               ha-339000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T18_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:11:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.48
	  Hostname:    ha-339000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff4f98c52674609a5c1f5d575590d85
	  System UUID:                95806f43-d226-fc45-855f-7545f5ff8c84
	  Boot ID:                    189078cc-12dc-4313-b8cc-2bd120e015e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8tbsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kindnet-gt8g4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-proxy-q8dsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x2 over 5m7s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x2 over 5m7s)  kubelet          Node ha-339000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x2 over 5m7s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-339000-m03 event: Registered Node ha-339000-m03 in Controller
	  Normal  NodeReady                4m36s                kubelet          Node ha-339000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 01:06] hrtimer: interrupt took 1854501 ns
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	{"level":"info","ts":"2024-07-17T01:04:13.014576Z","caller":"traceutil/trace.go:171","msg":"trace[872493798] transaction","detail":"{read_only:false; response_revision:2392; number_of_response:1; }","duration":"177.131462ms","start":"2024-07-17T01:04:12.837413Z","end":"2024-07-17T01:04:13.014545Z","steps":["trace[872493798] 'process raft request'  (duration: 176.960762ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:01.592724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-07-17T01:06:01.60253Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2051,"took":"8.916702ms","hash":355462830,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:06:01.602647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":355462830,"revision":2051,"compact-revision":1513}
	{"level":"info","ts":"2024-07-17T01:06:42.274723Z","caller":"traceutil/trace.go:171","msg":"trace[983672699] transaction","detail":"{read_only:false; response_revision:2660; number_of_response:1; }","duration":"112.448025ms","start":"2024-07-17T01:06:42.162253Z","end":"2024-07-17T01:06:42.274701Z","steps":["trace[983672699] 'process raft request'  (duration: 112.241325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:42.853896Z","caller":"traceutil/trace.go:171","msg":"trace[679544412] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"221.82955ms","start":"2024-07-17T01:06:42.632048Z","end":"2024-07-17T01:06:42.853877Z","steps":["trace[679544412] 'process raft request'  (duration: 221.09335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:07:01.40972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.351031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7064336636883510776 > lease_revoke:<id:620990be27382545>","response":"size:29"}
	{"level":"info","ts":"2024-07-17T01:07:01.409947Z","caller":"traceutil/trace.go:171","msg":"trace[1328045754] linearizableReadLoop","detail":"{readStateIndex:3001; appliedIndex:3000; }","duration":"269.211557ms","start":"2024-07-17T01:07:01.140722Z","end":"2024-07-17T01:07:01.409933Z","steps":["trace[1328045754] 'read index received'  (duration: 122.179226ms)","trace[1328045754] 'applied index is now lower than readState.Index'  (duration: 147.031131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:07:01.410655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.898858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-07-17T01:07:01.410717Z","caller":"traceutil/trace.go:171","msg":"trace[1287806677] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2729; }","duration":"270.008258ms","start":"2024-07-17T01:07:01.140698Z","end":"2024-07-17T01:07:01.410707Z","steps":["trace[1287806677] 'agreement among raft nodes before linearized reading'  (duration: 269.690957ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:07:05.608227Z","caller":"traceutil/trace.go:171","msg":"trace[977721237] transaction","detail":"{read_only:false; response_revision:2744; number_of_response:1; }","duration":"129.521427ms","start":"2024-07-17T01:07:05.478688Z","end":"2024-07-17T01:07:05.608209Z","steps":["trace[977721237] 'process raft request'  (duration: 129.341327ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:11:01.612897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2587}
	{"level":"info","ts":"2024-07-17T01:11:01.626116Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2587,"took":"12.501801ms","hash":3224311936,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1982464,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-07-17T01:11:01.626215Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3224311936,"revision":2587,"compact-revision":2051}
	{"level":"info","ts":"2024-07-17T01:11:07.411618Z","caller":"traceutil/trace.go:171","msg":"trace[1857286762] transaction","detail":"{read_only:false; response_revision:3223; number_of_response:1; }","duration":"111.812009ms","start":"2024-07-17T01:11:07.299785Z","end":"2024-07-17T01:11:07.411597Z","steps":["trace[1857286762] 'process raft request'  (duration: 111.694809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:11:07.564647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.541611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:11:07.564832Z","caller":"traceutil/trace.go:171","msg":"trace[1937676543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3223; }","duration":"141.741911ms","start":"2024-07-17T01:11:07.423051Z","end":"2024-07-17T01:11:07.564793Z","steps":["trace[1937676543] 'range keys from in-memory index tree'  (duration: 141.472411ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:11:57 up 28 min,  0 users,  load average: 0.20, 0.25, 0.31
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:10:57.428527       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:07.430394       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:07.430549       1 main.go:303] handling current node
	I0717 01:11:07.430687       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:07.430786       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:17.433418       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:17.433903       1 main.go:303] handling current node
	I0717 01:11:17.434137       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:17.434165       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:27.427845       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:27.428245       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:27.428491       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:27.428507       1 main.go:303] handling current node
	I0717 01:11:37.434326       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:37.434362       1 main.go:303] handling current node
	I0717 01:11:37.434379       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:37.434385       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:47.433815       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:47.433949       1 main.go:303] handling current node
	I0717 01:11:47.433970       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:47.433979       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:57.428788       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:57.428939       1 main.go:303] handling current node
	I0717 01:11:57.428973       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:57.428997       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	I0717 01:06:50.412939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-339000-m03\" does not exist"
	I0717 01:06:50.457469       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-339000-m03" podCIDRs=["10.244.1.0/24"]
	I0717 01:06:54.850142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-339000-m03"
	I0717 01:07:21.350361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-339000-m03"
	I0717 01:07:21.400227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.3µs"
	I0717 01:07:21.401000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.8µs"
	I0717 01:07:21.425714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0717 01:07:24.751410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.345403ms"
	I0717 01:07:24.752323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:07:07 ha-339000 kubelet[2368]: E0717 01:07:07.802131    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:07:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:07:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:08:07 ha-339000 kubelet[2368]: E0717 01:08:07.786256    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:09:07 ha-339000 kubelet[2368]: E0717 01:09:07.787105    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:10:07 ha-339000 kubelet[2368]: E0717 01:10:07.789022    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:11:07 ha-339000 kubelet[2368]: E0717 01:11:07.791070    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:11:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:11:50.018742    4344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.0732604s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  6m33s (x5 over 21m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  4m39s (x2 over 4m50s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (95.88s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (45.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.7050676s)
ha_test.go:413: expected profile "ha-339000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-339000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-339000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-339000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.175.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.164.29\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.165.29\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.164.48\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.0719898s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.2837858s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-339000 -v=7                | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:07 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-339000 node stop m02 -v=7         | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:10 PDT | 16 Jul 24 18:11 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
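The 130-byte `daemon.json` copied here is what makes Docker honor the "cgroupfs" driver chosen two lines up. The log does not show the file's contents, so the fragment below is a hypothetical reconstruction of a typical cgroupfs configuration, not a quote from this run:

```shell
# Hypothetical daemon.json for "configuring docker to use cgroupfs";
# written to a temp dir -- the real target is /etc/docker/daemon.json.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
grep -q 'native.cgroupdriver=cgroupfs' "$tmp/daemon.json" && echo ok
```

The `systemctl daemon-reload` / `restart docker` pair that follows is what actually picks the new driver up.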
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
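The hosts update just logged is idempotent: it strips any existing `host.minikube.internal` line, appends the fresh mapping, and copies the temp file back over `/etc/hosts`. The same pattern against a scratch copy (no `sudo`; bash is assumed for the `$'\t'` literal):

```shell
# Idempotent hosts-entry update, mirroring the logged bash -c command
# but operating on a scratch file instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.27.160.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.27.160.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry survives
```

Running it repeatedly never duplicates the entry, which is why the preceding `grep` probe is allowed to fail.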
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
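The 8.3s step above unpacks the ~360 MB preload tarball with tar delegating decompression to an external program via `-I lz4`, while `--xattrs-include security.capability` preserves file capabilities on the extracted binaries. The same `-I` round-trip pattern, substituting gzip so the sketch runs without the lz4 binary (an assumption of this sketch, not a change to what the log did):

```shell
# tar's -I flag hands (de)compression to an external program; the log
# uses lz4, this sketch substitutes gzip for portability.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
tar -C "$src" -I gzip -cf "$dst/preload.tar.gz" file.txt
tar -C "$dst" -I gzip -xf "$dst/preload.tar.gz"
cat "$dst/file.txt"   # hello
```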
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
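The three `openssl x509 -hash` / `ln -fs <hash>.0` rounds above build OpenSSL's hashed CA directory layout: each trusted cert gets a symlink named after its subject hash (e.g. `b5213941.0`) so TLS clients can locate it by hash under `/etc/ssl/certs`. The step can be recreated with a throwaway self-signed cert (names and paths below are illustrative, not from this run):

```shell
# Recreate the subject-hash symlink step with a throwaway CA cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

The `.0` suffix is a collision counter; a second CA with the same subject hash would be linked as `<hash>.1`.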
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clu
sterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
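The `Get-VMSwitch` query above returns a JSON list of candidate switches: any External switch, plus the built-in "Default Switch" matched by its well-known GUID. The driver then prefers an External switch and falls back to the Default Switch (here reported with `SwitchType: 1`, i.e. Internal, under the usual enum Private=0, Internal=1, External=2). A minimal Python sketch of that selection logic — the function name is illustrative, not minikube's actual code:

```python
import json

# Well-known GUID of Hyper-V's built-in "Default Switch" (visible in the log above).
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def choose_switch(get_vmswitch_json: str) -> str:
    """Pick a virtual switch the way the log suggests: prefer an External
    switch (SwitchType == 2), otherwise fall back to the Default Switch."""
    switches = json.loads(get_vmswitch_json)
    external = [s for s in switches if s["SwitchType"] == 2]
    if external:
        return external[0]["Name"]
    for s in switches:
        if s["Id"].lower() == DEFAULT_SWITCH_ID:
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")
```

Feeding in the exact JSON from the log yields "Default Switch", matching the `Using switch "Default Switch"` line further down.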
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
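The "magic tar header" / "SSH key tar header" lines refer to the docker-machine-style trick of writing the freshly generated SSH public key into the small fixed VHD as a raw tar stream, which the boot2docker-style guest unpacks on first boot. A hedged Python sketch of the idea — the entry name `.ssh/authorized_keys` is an assumption about the on-disk layout, not a confirmed detail of this driver:

```python
import io
import tarfile

def build_key_tarball(pub_key: bytes) -> bytes:
    """Illustrative sketch: pack an SSH public key into an in-memory tar
    stream of the kind a boot2docker-style image unpacks on first boot.
    The entry name below is an assumption, not the exact layout."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tw:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(pub_key)
        tw.addfile(info, io.BytesIO(pub_key))
    return buf.getvalue()
```

This also explains the odd VHD dance below: a 10MB *fixed* VHD is created first (fixed VHDs are raw data plus a footer, so the tar can be written at offset 0), then converted to a dynamic VHD and resized to the requested 20000MB.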
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
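The "Waiting for host to start..." phase above is a simple poll loop: query the VM state, then ask for the first NIC's first IP address, and retry (with roughly a one-second pause, judging by the timestamps) until the stdout is non-empty — here it takes five rounds before `172.27.165.29` appears. A minimal Python sketch of that loop, with the PowerShell one-liner abstracted into a callable for testability:

```python
import time
from typing import Callable, Optional

def wait_for_ip(get_ip: Callable[[], str], attempts: int = 60,
                delay: float = 1.0) -> Optional[str]:
    """Poll a supplied "first NIC, first IP" query (a stand-in for the
    PowerShell one-liner in the log) until it returns a non-empty string."""
    for _ in range(attempts):
        ip = get_ip().strip()
        if ip:
            return ip
        time.sleep(delay)
    return None
```

The attempt count and delay here are illustrative defaults, not minikube's actual tunables.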
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
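The shell snippet above keeps `/etc/hosts` consistent with the new hostname: if no line already ends in the hostname, it rewrites an existing `127.0.1.1` entry in place, or appends one. The same logic as a small Python function (a sketch for clarity, not minikube code):

```python
import re

def ensure_hostname_entry(hosts: str, name: str) -> str:
    """Mimic the shell logic from the log: if no line already maps the
    hostname, rewrite an existing 127.0.1.1 line, else append a new one."""
    if re.search(r"^.*\s" + re.escape(name) + r"$", hosts, re.M):
        return hosts
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + name,
                      hosts, flags=re.M)
    return hosts.rstrip("\n") + "\n127.0.1.1 " + name + "\n"
```

Note the command produces no output on success, which is why the log records `SSH cmd err, output: <nil>:` with an empty body.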
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
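The `diff || mv` step above is minikube's install-if-changed idiom: replace the unit file only when the freshly rendered copy differs (or the target does not exist yet), then reload systemd and restart the service. A minimal sketch with throwaway paths (`/tmp/demo-docker.service` is hypothetical; the real flow follows the `mv` with `systemctl daemon-reload`, `enable`, and `restart docker`):

```shell
# Install-if-changed: swap in the new rendering only when it differs
# from (or is newer than) the currently installed file.
cur=/tmp/demo-docker.service
new=/tmp/demo-docker.service.new
printf '[Unit]\nDescription=demo\n' > "$new"
# diff exits non-zero when the files differ AND when $cur is missing,
# so a first-time provision also takes the mv branch.
diff -u "$cur" "$new" || mv "$new" "$cur"
```

The `diff: can't stat '/lib/systemd/system/docker.service'` line in the log is exactly the first-provision case: the target unit did not exist yet, so the new file was moved into place.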
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
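The clock fix above probes the guest epoch over SSH, compares it to the host time (a 4.7s delta in this run), and resets the guest clock with `sudo date -s @<epoch>`. A minimal sketch of that decision using the epoch values from this log (the 2-second threshold is an assumption for illustration, not minikube's actual cutoff):

```shell
# Probe the local clock the same way minikube probes the guest.
now=$(date +%s.%N)
guest=1721177320   # epoch the guest reported in the log above
remote=1721177315  # host-side reference, rounded for illustration
delta=$((guest - remote))
# Resync only when the drift exceeds the (assumed) tolerance.
if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
  echo "drift ${delta}s; resync with: sudo date -s @$guest"
fi
```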
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
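The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place; the `SystemdCgroup` toggle is representative of the pattern, which captures the leading indentation so the TOML nesting survives the rewrite. A sketch against a scratch copy (`/tmp/demo-config.toml` is hypothetical; `-i -r` as used here is GNU sed):

```shell
# Flip SystemdCgroup off while preserving the line's indentation,
# mirroring the in-place edit minikube applies to config.toml.
cfg=/tmp/demo-config.toml
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

The `( *)` capture plus `\1` backreference is what keeps the four leading spaces intact; a bare `s|SystemdCgroup = .*|...|` would work here too, but the anchored form avoids matching the key inside comments or values.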
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              26 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         26 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     26 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         26 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         26 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         26 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         26 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:12:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	Name:               ha-339000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T18_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:12:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:07:52 +0000   Wed, 17 Jul 2024 01:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.48
	  Hostname:    ha-339000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff4f98c52674609a5c1f5d575590d85
	  System UUID:                95806f43-d226-fc45-855f-7545f5ff8c84
	  Boot ID:                    189078cc-12dc-4313-b8cc-2bd120e015e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8tbsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-gt8g4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-proxy-q8dsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node ha-339000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-339000-m03 event: Registered Node ha-339000-m03 in Controller
	  Normal  NodeReady                5m21s                  kubelet          Node ha-339000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 01:06] hrtimer: interrupt took 1854501 ns
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	{"level":"info","ts":"2024-07-17T01:04:13.014576Z","caller":"traceutil/trace.go:171","msg":"trace[872493798] transaction","detail":"{read_only:false; response_revision:2392; number_of_response:1; }","duration":"177.131462ms","start":"2024-07-17T01:04:12.837413Z","end":"2024-07-17T01:04:13.014545Z","steps":["trace[872493798] 'process raft request'  (duration: 176.960762ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:01.592724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-07-17T01:06:01.60253Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2051,"took":"8.916702ms","hash":355462830,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:06:01.602647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":355462830,"revision":2051,"compact-revision":1513}
	{"level":"info","ts":"2024-07-17T01:06:42.274723Z","caller":"traceutil/trace.go:171","msg":"trace[983672699] transaction","detail":"{read_only:false; response_revision:2660; number_of_response:1; }","duration":"112.448025ms","start":"2024-07-17T01:06:42.162253Z","end":"2024-07-17T01:06:42.274701Z","steps":["trace[983672699] 'process raft request'  (duration: 112.241325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:42.853896Z","caller":"traceutil/trace.go:171","msg":"trace[679544412] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"221.82955ms","start":"2024-07-17T01:06:42.632048Z","end":"2024-07-17T01:06:42.853877Z","steps":["trace[679544412] 'process raft request'  (duration: 221.09335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:07:01.40972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.351031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7064336636883510776 > lease_revoke:<id:620990be27382545>","response":"size:29"}
	{"level":"info","ts":"2024-07-17T01:07:01.409947Z","caller":"traceutil/trace.go:171","msg":"trace[1328045754] linearizableReadLoop","detail":"{readStateIndex:3001; appliedIndex:3000; }","duration":"269.211557ms","start":"2024-07-17T01:07:01.140722Z","end":"2024-07-17T01:07:01.409933Z","steps":["trace[1328045754] 'read index received'  (duration: 122.179226ms)","trace[1328045754] 'applied index is now lower than readState.Index'  (duration: 147.031131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:07:01.410655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.898858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-07-17T01:07:01.410717Z","caller":"traceutil/trace.go:171","msg":"trace[1287806677] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2729; }","duration":"270.008258ms","start":"2024-07-17T01:07:01.140698Z","end":"2024-07-17T01:07:01.410707Z","steps":["trace[1287806677] 'agreement among raft nodes before linearized reading'  (duration: 269.690957ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:07:05.608227Z","caller":"traceutil/trace.go:171","msg":"trace[977721237] transaction","detail":"{read_only:false; response_revision:2744; number_of_response:1; }","duration":"129.521427ms","start":"2024-07-17T01:07:05.478688Z","end":"2024-07-17T01:07:05.608209Z","steps":["trace[977721237] 'process raft request'  (duration: 129.341327ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:11:01.612897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2587}
	{"level":"info","ts":"2024-07-17T01:11:01.626116Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2587,"took":"12.501801ms","hash":3224311936,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1982464,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-07-17T01:11:01.626215Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3224311936,"revision":2587,"compact-revision":2051}
	{"level":"info","ts":"2024-07-17T01:11:07.411618Z","caller":"traceutil/trace.go:171","msg":"trace[1857286762] transaction","detail":"{read_only:false; response_revision:3223; number_of_response:1; }","duration":"111.812009ms","start":"2024-07-17T01:11:07.299785Z","end":"2024-07-17T01:11:07.411597Z","steps":["trace[1857286762] 'process raft request'  (duration: 111.694809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:11:07.564647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.541611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:11:07.564832Z","caller":"traceutil/trace.go:171","msg":"trace[1937676543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3223; }","duration":"141.741911ms","start":"2024-07-17T01:11:07.423051Z","end":"2024-07-17T01:11:07.564793Z","steps":["trace[1937676543] 'range keys from in-memory index tree'  (duration: 141.472411ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:12:43 up 28 min,  0 users,  load average: 0.28, 0.26, 0.31
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:11:37.434385       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:47.433815       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:47.433949       1 main.go:303] handling current node
	I0717 01:11:47.433970       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:47.433979       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:11:57.428788       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:11:57.428939       1 main.go:303] handling current node
	I0717 01:11:57.428973       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:11:57.428997       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:12:07.437274       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:12:07.437416       1 main.go:303] handling current node
	I0717 01:12:07.437479       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:12:07.437491       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:12:17.433532       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:12:17.433669       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:12:17.434145       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:12:17.434177       1 main.go:303] handling current node
	I0717 01:12:27.428416       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:12:27.428620       1 main.go:303] handling current node
	I0717 01:12:27.428654       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:12:27.428830       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:12:37.436552       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:12:37.436630       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:12:37.436985       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:12:37.437017       1 main.go:303] handling current node
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	I0717 01:06:50.412939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-339000-m03\" does not exist"
	I0717 01:06:50.457469       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-339000-m03" podCIDRs=["10.244.1.0/24"]
	I0717 01:06:54.850142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-339000-m03"
	I0717 01:07:21.350361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-339000-m03"
	I0717 01:07:21.400227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.3µs"
	I0717 01:07:21.401000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.8µs"
	I0717 01:07:21.425714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0717 01:07:24.751410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.345403ms"
	I0717 01:07:24.752323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:08:07 ha-339000 kubelet[2368]: E0717 01:08:07.786256    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:08:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:08:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:09:07 ha-339000 kubelet[2368]: E0717 01:09:07.787105    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:09:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:09:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:10:07 ha-339000 kubelet[2368]: E0717 01:10:07.789022    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:11:07 ha-339000 kubelet[2368]: E0717 01:11:07.791070    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:11:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:12:07 ha-339000 kubelet[2368]: E0717 01:12:07.787135    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:12:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:12:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:12:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:12:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:12:35.210781    4960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (11.8885767s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  7m18s (x5 over 22m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  18s (x3 over 5m35s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (45.02s)

TestMultiControlPlane/serial/RestartSecondaryNode (86.55s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 node start m02 -v=7 --alsologtostderr: exit status 1 (6.0843907s)

-- stdout --
	* Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	* Restarting existing hyperv VM for "ha-339000-m02" ...

-- /stdout --
** stderr ** 
	W0716 18:12:56.455584   15108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 18:12:56.464044   15108 out.go:291] Setting OutFile to fd 1016 ...
	I0716 18:12:56.484636   15108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:12:56.484636   15108 out.go:304] Setting ErrFile to fd 248...
	I0716 18:12:56.484636   15108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:12:56.500374   15108 mustload.go:65] Loading cluster: ha-339000
	I0716 18:12:56.501552   15108 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:12:56.501957   15108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:12:58.612528   15108 main.go:141] libmachine: [stdout =====>] : Off
	
	I0716 18:12:58.612528   15108 main.go:141] libmachine: [stderr =====>] : 
	W0716 18:12:58.612528   15108 host.go:58] "ha-339000-m02" host status: Stopped
	I0716 18:12:58.619575   15108 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 18:12:58.622036   15108 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:12:58.622646   15108 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:12:58.622646   15108 cache.go:56] Caching tarball of preloaded images
	I0716 18:12:58.623183   15108 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:12:58.623454   15108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:12:58.623517   15108 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 18:12:58.625677   15108 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:12:58.625677   15108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 18:12:58.626205   15108 start.go:96] Skipping create...Using existing machine configuration
	I0716 18:12:58.626300   15108 fix.go:54] fixHost starting: m02
	I0716 18:12:58.626433   15108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 18:13:00.737820   15108 main.go:141] libmachine: [stdout =====>] : Off
	
	I0716 18:13:00.738784   15108 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:13:00.738820   15108 fix.go:112] recreateIfNeeded on ha-339000-m02: state=Stopped err=<nil>
	W0716 18:13:00.738941   15108 fix.go:138] unexpected machine state, will restart: <nil>
	I0716 18:13:00.742668   15108 out.go:177] * Restarting existing hyperv VM for "ha-339000-m02" ...
	I0716 18:13:00.747055   15108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02

** /stderr **
ha_test.go:422: W0716 18:12:56.455584   15108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 18:12:56.464044   15108 out.go:291] Setting OutFile to fd 1016 ...
I0716 18:12:56.484636   15108 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 18:12:56.484636   15108 out.go:304] Setting ErrFile to fd 248...
I0716 18:12:56.484636   15108 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 18:12:56.500374   15108 mustload.go:65] Loading cluster: ha-339000
I0716 18:12:56.501552   15108 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 18:12:56.501957   15108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
I0716 18:12:58.612528   15108 main.go:141] libmachine: [stdout =====>] : Off

I0716 18:12:58.612528   15108 main.go:141] libmachine: [stderr =====>] : 
W0716 18:12:58.612528   15108 host.go:58] "ha-339000-m02" host status: Stopped
I0716 18:12:58.619575   15108 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
I0716 18:12:58.622036   15108 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0716 18:12:58.622646   15108 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0716 18:12:58.622646   15108 cache.go:56] Caching tarball of preloaded images
I0716 18:12:58.623183   15108 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0716 18:12:58.623454   15108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0716 18:12:58.623517   15108 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
I0716 18:12:58.625677   15108 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0716 18:12:58.625677   15108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
I0716 18:12:58.626205   15108 start.go:96] Skipping create...Using existing machine configuration
I0716 18:12:58.626300   15108 fix.go:54] fixHost starting: m02
I0716 18:12:58.626433   15108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
I0716 18:13:00.737820   15108 main.go:141] libmachine: [stdout =====>] : Off

I0716 18:13:00.738784   15108 main.go:141] libmachine: [stderr =====>] : 
I0716 18:13:00.738820   15108 fix.go:112] recreateIfNeeded on ha-339000-m02: state=Stopped err=<nil>
W0716 18:13:00.738941   15108 fix.go:138] unexpected machine state, will restart: <nil>
I0716 18:13:00.742668   15108 out.go:177] * Restarting existing hyperv VM for "ha-339000-m02" ...
I0716 18:13:00.747055   15108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-339000 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (98.7µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
E0716 18:13:44.024291    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-339000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000
E0716 18:14:00.807525    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-339000 -n ha-339000: (12.1432411s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-339000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-339000 logs -n 25: (8.336485s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:00 PDT | 16 Jul 24 18:00 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:01 PDT | 16 Jul 24 18:01 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT | 16 Jul 24 18:02 PDT |
	|         | busybox-fc5497c4f-2lw5c -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:02 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- get pods -o          | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:03 PDT |
	|         | busybox-fc5497c4f-2lw5c              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-2lw5c -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-7zvzh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-339000 -- exec                 | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT |                     |
	|         | busybox-fc5497c4f-8tbsm              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-339000 -v=7                | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:03 PDT | 16 Jul 24 18:07 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-339000 node stop m02 -v=7         | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:10 PDT | 16 Jul 24 18:11 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-339000 node start m02 -v=7        | ha-339000 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:12 PDT |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:43:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:43:02.511657    3116 out.go:291] Setting OutFile to fd 724 ...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.512326    3116 out.go:304] Setting ErrFile to fd 828...
	I0716 17:43:02.512326    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:43:02.533555    3116 out.go:298] Setting JSON to false
	I0716 17:43:02.537630    3116 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18621,"bootTime":1721158360,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:43:02.537705    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:43:02.544475    3116 out.go:177] * [ha-339000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:43:02.549507    3116 notify.go:220] Checking for updates...
	I0716 17:43:02.551930    3116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:43:02.555630    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:43:02.558820    3116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:43:02.561747    3116 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:43:02.564654    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:43:02.567370    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:43:07.707782    3116 out.go:177] * Using the hyperv driver based on user configuration
	I0716 17:43:07.712395    3116 start.go:297] selected driver: hyperv
	I0716 17:43:07.712395    3116 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:43:07.712395    3116 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 17:43:07.764290    3116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:43:07.765868    3116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 17:43:07.765868    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:43:07.765960    3116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 17:43:07.766008    3116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 17:43:07.766045    3116 start.go:340] cluster config:
	{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:43:07.766045    3116 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:43:07.770520    3116 out.go:177] * Starting "ha-339000" primary control-plane node in "ha-339000" cluster
	I0716 17:43:07.774367    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:43:07.774367    3116 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:43:07.774367    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:43:07.775474    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:43:07.775474    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:43:07.776251    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:43:07.776529    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json: {Name:mkc12069a4f250631f9bc5aa8f09094ef8a634f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:43:07.776781    3116 start.go:360] acquireMachinesLock for ha-339000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:43:07.777775    3116 start.go:364] duration metric: took 993.4µs to acquireMachinesLock for "ha-339000"
	I0716 17:43:07.778188    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:43:07.778188    3116 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 17:43:07.779428    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:43:07.779428    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:43:07.779428    3116 client.go:168] LocalClient.Create starting
	I0716 17:43:07.782101    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:43:07.782160    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:09.701162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:11.329393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:12.719177    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:16.159241    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:16.162438    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:43:16.628521    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: Creating VM...
	I0716 17:43:16.904355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:43:19.641451    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:19.641654    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:43:19.641654    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:21.285640    3116 main.go:141] libmachine: Creating VHD
	I0716 17:43:21.285640    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B7AF00A4-13CB-4472-846F-00D579689963
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:43:24.891682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:24.891682    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:43:24.891816    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:43:24.900682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:43:28.002547    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:28.003513    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd' -SizeBytes 20000MB
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:30.934256    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-339000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:43:34.500622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:34.501333    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000 -DynamicMemoryEnabled $false
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:36.646608    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:36.647419    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000 -Count 2
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:38.716673    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\boot2docker.iso'
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:41.256400    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:41.256983    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\disk.vhd'
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:43.802981    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:43.803075    3116 main.go:141] libmachine: Starting VM...
	I0716 17:43:43.803075    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:47.377738    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:43:47.378361    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:49.602625    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:52.116578    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:52.117133    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:53.130204    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:43:55.250878    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:55.251051    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:43:57.846254    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:43:58.853368    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:00.989807    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:03.433858    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:03.434348    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:04.437265    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:06.576432    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:06.577200    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:09.050275    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:44:09.050682    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:10.063395    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:12.232913    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:12.233732    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:14.788040    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:14.789393    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:16.892820    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:16.893874    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:44:16.894043    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:19.029084    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:19.029376    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:19.029558    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:21.521127    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:21.521201    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:21.526623    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:21.537644    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:21.537644    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:44:21.680155    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:44:21.680261    3116 buildroot.go:166] provisioning hostname "ha-339000"
	I0716 17:44:21.680261    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:23.781138    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:23.781877    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:26.235915    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:26.240664    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:26.240664    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:26.240664    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000 && echo "ha-339000" | sudo tee /etc/hostname
	I0716 17:44:26.408374    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000
	
	I0716 17:44:26.408938    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:28.481194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:28.481415    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:30.934756    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:30.935765    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:30.941015    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:30.941991    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:30.942112    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:44:31.103013    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 17:44:31.103013    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:44:31.103013    3116 buildroot.go:174] setting up certificates
	I0716 17:44:31.103013    3116 provision.go:84] configureAuth start
	I0716 17:44:31.103013    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:33.215706    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:35.687142    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:35.687352    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:37.824280    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:40.418998    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:40.418998    3116 provision.go:143] copyHostCerts
	I0716 17:44:40.419252    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:44:40.419628    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:44:40.419722    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:44:40.420233    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:44:40.421567    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:44:40.421846    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:44:40.421846    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:44:40.422063    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:44:40.423106    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:44:40.423363    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:44:40.423471    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:44:40.423633    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:44:40.424682    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000 san=[127.0.0.1 172.27.164.29 ha-339000 localhost minikube]
	I0716 17:44:40.501478    3116 provision.go:177] copyRemoteCerts
	I0716 17:44:40.515721    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:44:40.515721    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:42.714496    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:42.715319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:45.287716    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:45.287976    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:44:45.395308    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879538s)
	I0716 17:44:45.395308    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:44:45.395845    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:44:45.445298    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:44:45.445298    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0716 17:44:45.493119    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:44:45.493477    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:44:45.540034    3116 provision.go:87] duration metric: took 14.4369628s to configureAuth
	I0716 17:44:45.540034    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:44:45.540034    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:44:45.540034    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:47.656405    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:47.657416    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:47.657606    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:50.287493    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:50.293970    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:50.294780    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:50.294780    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:44:50.438690    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:44:50.438690    3116 buildroot.go:70] root file system type: tmpfs
	I0716 17:44:50.439242    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:44:50.439463    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:52.613031    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:52.613202    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:55.112583    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:55.112780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:55.118787    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:55.119603    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:55.119603    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:44:55.287849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:44:55.287849    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:44:57.327655    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:57.327749    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:44:59.771637    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:44:59.772464    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:44:59.778125    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:44:59.778350    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:44:59.778350    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:45:02.011245    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:45:02.011310    3116 machine.go:97] duration metric: took 45.1171906s to provisionDockerMachine
	I0716 17:45:02.011310    3116 client.go:171] duration metric: took 1m54.2314258s to LocalClient.Create
	I0716 17:45:02.011310    3116 start.go:167] duration metric: took 1m54.2314258s to libmachine.API.Create "ha-339000"
	I0716 17:45:02.011310    3116 start.go:293] postStartSetup for "ha-339000" (driver="hyperv")
	I0716 17:45:02.011310    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:45:02.025617    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:45:02.025617    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:04.033532    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:04.033682    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:06.459588    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:06.460165    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:06.575115    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5494051s)
	I0716 17:45:06.589509    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:45:06.596657    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:45:06.596657    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:45:06.597949    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:45:06.597949    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:45:06.609164    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:45:06.627252    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:45:06.672002    3116 start.go:296] duration metric: took 4.6606727s for postStartSetup
	I0716 17:45:06.674968    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:08.765131    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:08.765380    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:08.765497    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:11.213594    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:11.214085    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:45:11.217931    3116 start.go:128] duration metric: took 2m3.4392489s to createHost
	I0716 17:45:11.218136    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:13.345097    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:13.345521    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:13.345624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:15.807039    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:15.807251    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:15.812906    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:15.813653    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:15.813653    3116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 17:45:15.948595    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177115.953724044
	
	I0716 17:45:15.948595    3116 fix.go:216] guest clock: 1721177115.953724044
	I0716 17:45:15.948595    3116 fix.go:229] Guest: 2024-07-16 17:45:15.953724044 -0700 PDT Remote: 2024-07-16 17:45:11.2180611 -0700 PDT m=+128.786700601 (delta=4.735662944s)
	I0716 17:45:15.948595    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:18.008670    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:20.478947    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:20.484999    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:45:20.485772    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.164.29 22 <nil> <nil>}
	I0716 17:45:20.485772    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177115
	I0716 17:45:20.637610    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:45:15 UTC 2024
	
	I0716 17:45:20.637610    3116 fix.go:236] clock set: Wed Jul 17 00:45:15 UTC 2024
	 (err=<nil>)
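	The clock-fix step above reads the guest's epoch time with `date +%s.%N`, compares it with the host's recorded wall clock, and resyncs with `sudo date -s @<epoch>` when they drift. A small sketch of the drift computation, using the exact numbers from the log:

```python
# Sketch of the guest-clock drift check from the fix step above.
# Both values are taken verbatim from the log lines.
guest = 1721177115.953724044   # guest `date +%s.%N`
host = 1721177111.218061100    # host wall clock when createHost finished
delta = guest - host
# Matches the logged "delta=4.735662944s" (tolerance covers float rounding
# at epoch-second magnitudes).
assert abs(delta - 4.735662944) < 1e-5
```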
	I0716 17:45:20.637610    3116 start.go:83] releasing machines lock for "ha-339000", held for 2m12.8593042s
	I0716 17:45:20.638234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:22.707554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:22.708142    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:25.107783    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:25.107859    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:25.111724    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:45:25.112251    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:25.126162    3116 ssh_runner.go:195] Run: cat /version.json
	I0716 17:45:25.126162    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252437    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252683    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:27.252784    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.842633    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.842726    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:45:29.866500    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:45:29.867122    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:45:29.942290    3116 ssh_runner.go:235] Completed: cat /version.json: (4.8161085s)
	I0716 17:45:29.955151    3116 ssh_runner.go:195] Run: systemctl --version
	I0716 17:45:29.963183    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.850807s)
	W0716 17:45:29.963261    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:45:29.989858    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0716 17:45:30.002334    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:45:30.024455    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:45:30.060489    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:45:30.060489    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.060904    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 17:45:30.088360    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:45:30.088360    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
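	The `command not found` failure above comes from invoking `curl.exe` — the Windows binary name — inside the Linux guest over SSH. A hedged sketch (illustrative only, not minikube's actual fix) of picking the binary name by the target system rather than the host:

```python
# Sketch: choose the curl binary name by the *target* OS the command will
# run on, not the host OS driving the test. The ".exe" suffix only exists
# on Windows; the Linux guest ships plain "curl".
def curl_binary(target_goos: str) -> str:
    return "curl.exe" if target_goos == "windows" else "curl"

assert curl_binary("linux") == "curl"
assert curl_binary("windows") == "curl.exe"
```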
	I0716 17:45:30.114896    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:45:30.150731    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:45:30.171885    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:45:30.184912    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:45:30.217702    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.252942    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:45:30.288430    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:45:30.319928    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:45:30.353694    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:45:30.385470    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:45:30.416864    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:45:30.450585    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:45:30.481697    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:45:30.512997    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:30.704931    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 17:45:30.737254    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:45:30.750734    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:45:30.788689    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.822648    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:45:30.874446    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:45:30.912097    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:30.952128    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:45:31.016563    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:45:31.042740    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:45:31.097374    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:45:31.118595    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:45:31.137209    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:45:31.181898    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:45:31.367167    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:45:31.535950    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:45:31.535950    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:45:31.582386    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:31.765270    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:34.356386    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5899608s)
	I0716 17:45:34.370945    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 17:45:34.411491    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:34.453125    3116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 17:45:34.646541    3116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 17:45:34.834414    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.024555    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 17:45:35.073660    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 17:45:35.110577    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:35.302754    3116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 17:45:35.404870    3116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 17:45:35.419105    3116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 17:45:35.428433    3116 start.go:563] Will wait 60s for crictl version
	I0716 17:45:35.440438    3116 ssh_runner.go:195] Run: which crictl
	I0716 17:45:35.457168    3116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 17:45:35.508992    3116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 17:45:35.520306    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.565599    3116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 17:45:35.604169    3116 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 17:45:35.604426    3116 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 17:45:35.608415    3116 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 17:45:35.611147    3116 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 17:45:35.612104    3116 ip.go:210] interface addr: 172.27.160.1/20
	I0716 17:45:35.623561    3116 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 17:45:35.630491    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:35.662981    3116 kubeadm.go:883] updating cluster {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 17:45:35.662981    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:45:35.673543    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:35.695912    3116 docker.go:685] Got preloaded images: 
	I0716 17:45:35.696081    3116 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 17:45:35.708492    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:35.739856    3116 ssh_runner.go:195] Run: which lz4
	I0716 17:45:35.746783    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 17:45:35.760321    3116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 17:45:35.767157    3116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 17:45:35.767273    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
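	The runner follows a check-then-copy pattern here: `stat` the remote path first, and only transfer the 360 MB preload tarball when the stat exits non-zero. A sketch of that pattern (the helper name and paths are illustrative; the stat invocation matches the log):

```python
# Sketch of the existence-check-then-copy pattern used for the preload
# tarball: probe with `stat -c "%s %y" <path>` and copy only on failure.
import subprocess

def needs_transfer(remote_path: str) -> bool:
    """Return True when the remote file is absent and a copy is required."""
    probe = subprocess.run(["stat", "-c", "%s %y", remote_path],
                           capture_output=True)
    # Exit status 1 ("cannot statx ... No such file or directory" in the
    # log above) means the file is missing and must be scp'd over.
    return probe.returncode != 0

assert needs_transfer("/no/such/preloaded.tar.lz4")
```

In the log this probe fails, so the runner copies `preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4` to `/preloaded.tar.lz4` and extracts it.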
	I0716 17:45:38.011722    3116 docker.go:649] duration metric: took 2.2635945s to copy over tarball
	I0716 17:45:38.025002    3116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 17:45:46.381303    3116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3561701s)
	I0716 17:45:46.381303    3116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 17:45:46.454009    3116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 17:45:46.473968    3116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 17:45:46.519985    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:46.713524    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:45:50.394952    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6814129s)
	I0716 17:45:50.405422    3116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 17:45:50.433007    3116 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 17:45:50.433123    3116 cache_images.go:84] Images are preloaded, skipping loading
	I0716 17:45:50.433169    3116 kubeadm.go:934] updating node { 172.27.164.29 8443 v1.30.2 docker true true} ...
	I0716 17:45:50.433394    3116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-339000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.164.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 17:45:50.442695    3116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 17:45:50.478932    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:45:50.479064    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:45:50.479064    3116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 17:45:50.479064    3116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.164.29 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-339000 NodeName:ha-339000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.164.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.164.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 17:45:50.479404    3116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.164.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-339000"
	  kubeletExtraArgs:
	    node-ip: 172.27.164.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.164.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
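	The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A quick sketch of splitting such a stream and identifying each document's `kind` without a YAML library (the abbreviated document bodies are placeholders):

```python
# Sketch: split a kubeadm multi-document YAML stream on its "---"
# separators and pull out each document's kind. Bodies are abbreviated
# stand-ins for the full config shown above.
config = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""
docs = [d for d in config.split("\n---\n") if d.strip()]
kinds = [next(line.split(": ", 1)[1]
              for line in d.splitlines() if line.startswith("kind:"))
         for d in docs]
assert kinds == ["InitConfiguration", "ClusterConfiguration",
                 "KubeletConfiguration", "KubeProxyConfiguration"]
```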
	I0716 17:45:50.479404    3116 kube-vip.go:115] generating kube-vip config ...
	I0716 17:45:50.491644    3116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0716 17:45:50.516295    3116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0716 17:45:50.516295    3116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0716 17:45:50.530360    3116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 17:45:50.546376    3116 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 17:45:50.558331    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0716 17:45:50.576216    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0716 17:45:50.606061    3116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 17:45:50.635320    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0716 17:45:50.664211    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0716 17:45:50.706502    3116 ssh_runner.go:195] Run: grep 172.27.175.254	control-plane.minikube.internal$ /etc/hosts
	I0716 17:45:50.713201    3116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 17:45:50.745878    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:45:50.932942    3116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 17:45:50.961051    3116 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000 for IP: 172.27.164.29
	I0716 17:45:50.961051    3116 certs.go:194] generating shared ca certs ...
	I0716 17:45:50.961163    3116 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:50.961988    3116 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 17:45:50.962350    3116 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 17:45:50.962488    3116 certs.go:256] generating profile certs ...
	I0716 17:45:50.962665    3116 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key
	I0716 17:45:50.963234    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt with IP's: []
	I0716 17:45:51.178866    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt ...
	I0716 17:45:51.178866    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.crt: {Name:mkd89d61973b93b04ca71461613c98415d1b9f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.180910    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key ...
	I0716 17:45:51.180910    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\client.key: {Name:mk0a579aaa829e7e40f530074e97e9919b1261db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.181483    3116 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d
	I0716 17:45:51.182488    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.164.29 172.27.175.254]
	I0716 17:45:51.429013    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d ...
	I0716 17:45:51.429013    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d: {Name:mke7c236b50094ddb9385ee31fa24cc5da9318c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430664    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d ...
	I0716 17:45:51.430664    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d: {Name:mka09a603970131d5468126ee7faf279e1eefeb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.430938    3116 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt
	I0716 17:45:51.443660    3116 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key.8c9d484d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key
	I0716 17:45:51.445360    3116 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key
	I0716 17:45:51.445360    3116 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt with IP's: []
	I0716 17:45:51.522844    3116 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt ...
	I0716 17:45:51.522844    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt: {Name:mk25d08d0bdbfc86370146fe47d07a3b52bdc710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525042    3116 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key ...
	I0716 17:45:51.525042    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key: {Name:mka4aa4f63a2bb94895757d9a70fbfbf38c01901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:45:51.525985    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 17:45:51.526509    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 17:45:51.526796    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 17:45:51.527004    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 17:45:51.527193    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 17:45:51.527474    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 17:45:51.527648    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 17:45:51.536038    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 17:45:51.536038    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 17:45:51.537093    3116 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 17:45:51.537093    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 17:45:51.538167    3116 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 17:45:51.538963    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:51.540357    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 17:45:51.591369    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 17:45:51.637324    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 17:45:51.681041    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 17:45:51.727062    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 17:45:51.773103    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 17:45:51.823727    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 17:45:51.867050    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 17:45:51.907476    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 17:45:51.947557    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 17:45:51.987685    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 17:45:52.033698    3116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 17:45:52.081106    3116 ssh_runner.go:195] Run: openssl version
	I0716 17:45:52.103130    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 17:45:52.135989    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.143040    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.156424    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 17:45:52.175752    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 17:45:52.210553    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 17:45:52.242377    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.250520    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.263123    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 17:45:52.283797    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 17:45:52.317739    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 17:45:52.354317    3116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.363253    3116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.378745    3116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 17:45:52.400594    3116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 17:45:52.438402    3116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 17:45:52.445902    3116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 17:45:52.446292    3116 kubeadm.go:392] StartCluster: {Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:45:52.456397    3116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 17:45:52.497977    3116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 17:45:52.532638    3116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 17:45:52.564702    3116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 17:45:52.584179    3116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 17:45:52.584179    3116 kubeadm.go:157] found existing configuration files:
	
	I0716 17:45:52.597395    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 17:45:52.613437    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 17:45:52.626633    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 17:45:52.657691    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 17:45:52.676289    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 17:45:52.688763    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 17:45:52.718589    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.737599    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 17:45:52.750588    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 17:45:52.781585    3116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 17:45:52.800208    3116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 17:45:52.812238    3116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 17:45:52.829242    3116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 17:45:53.296713    3116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 17:46:08.200591    3116 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 17:46:08.200773    3116 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 17:46:08.200931    3116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 17:46:08.201245    3116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 17:46:08.201618    3116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 17:46:08.201618    3116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 17:46:08.205053    3116 out.go:204]   - Generating certificates and keys ...
	I0716 17:46:08.205501    3116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 17:46:08.205606    3116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 17:46:08.205915    3116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 17:46:08.206211    3116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 17:46:08.206413    3116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 17:46:08.206487    3116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.206611    3116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 17:46:08.207214    3116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-339000 localhost] and IPs [172.27.164.29 127.0.0.1 ::1]
	I0716 17:46:08.207523    3116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 17:46:08.207758    3116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 17:46:08.208016    3116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 17:46:08.208182    3116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 17:46:08.208345    3116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 17:46:08.208905    3116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 17:46:08.209368    3116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 17:46:08.212353    3116 out.go:204]   - Booting up control plane ...
	I0716 17:46:08.212353    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 17:46:08.213367    3116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 17:46:08.213367    3116 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.847812ms
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [api-check] The API server is healthy after 9.078275025s
	I0716 17:46:08.214380    3116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 17:46:08.214975    3116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 17:46:08.214975    3116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 17:46:08.214975    3116 kubeadm.go:310] [mark-control-plane] Marking the node ha-339000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 17:46:08.214975    3116 kubeadm.go:310] [bootstrap-token] Using token: pxdanz.ukoapkuijp7tbuz4
	I0716 17:46:08.219185    3116 out.go:204]   - Configuring RBAC rules ...
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 17:46:08.219185    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 17:46:08.220247    3116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 17:46:08.220247    3116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 17:46:08.220247    3116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.220247    3116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 17:46:08.220247    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 17:46:08.221265    3116 kubeadm.go:310] 
	I0716 17:46:08.221265    3116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 17:46:08.221265    3116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 17:46:08.221265    3116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 17:46:08.222266    3116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.222266    3116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 17:46:08.222266    3116 kubeadm.go:310] 	--control-plane 
	I0716 17:46:08.222266    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 17:46:08.223284    3116 kubeadm.go:310] 
	I0716 17:46:08.223284    3116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxdanz.ukoapkuijp7tbuz4 \
	I0716 17:46:08.223284    3116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 17:46:08.223284    3116 cni.go:84] Creating CNI manager for ""
	I0716 17:46:08.223284    3116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 17:46:08.229319    3116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 17:46:08.248749    3116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 17:46:08.256943    3116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 17:46:08.257078    3116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 17:46:08.310700    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 17:46:08.994081    3116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 17:46:09.008591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.009591    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-339000 minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-339000 minikube.k8s.io/primary=true
	I0716 17:46:09.028627    3116 ops.go:34] apiserver oom_adj: -16
	I0716 17:46:09.265595    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:09.779516    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.277248    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:10.767674    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.272500    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:11.778110    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.273285    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:12.776336    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.273190    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:13.773410    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.278933    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:14.778605    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.270613    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:15.770738    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.274680    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:16.776638    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.277654    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:17.766771    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.274911    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:18.780900    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.270050    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.776234    3116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 17:46:19.890591    3116 kubeadm.go:1113] duration metric: took 10.8964655s to wait for elevateKubeSystemPrivileges
	I0716 17:46:19.890776    3116 kubeadm.go:394] duration metric: took 27.4443744s to StartCluster
	I0716 17:46:19.890776    3116 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.890776    3116 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:19.892349    3116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:46:19.894233    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 17:46:19.894233    3116 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:19.894341    3116 start.go:241] waiting for startup goroutines ...
	I0716 17:46:19.894233    3116 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 17:46:19.894432    3116 addons.go:69] Setting storage-provisioner=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:69] Setting default-storageclass=true in profile "ha-339000"
	I0716 17:46:19.894432    3116 addons.go:234] Setting addon storage-provisioner=true in "ha-339000"
	I0716 17:46:19.894432    3116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-339000"
	I0716 17:46:19.894621    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:19.894957    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:19.895901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:19.896148    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:20.057972    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 17:46:20.581090    3116 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.224137    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:22.224360    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:22.225117    3116 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:46:22.226057    3116 kapi.go:59] client config for ha-339000: &rest.Config{Host:"https://172.27.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-339000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 17:46:22.227551    3116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 17:46:22.227763    3116 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 17:46:22.227763    3116 addons.go:234] Setting addon default-storageclass=true in "ha-339000"
	I0716 17:46:22.227763    3116 host.go:66] Checking if "ha-339000" exists ...
	I0716 17:46:22.229355    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:22.230171    3116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:22.230171    3116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 17:46:22.230699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.497831    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:24.647828    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:24.648633    3116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:24.648761    3116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 17:46:24.648901    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000 ).state
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:26.855699    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000 ).networkadapters[0]).ipaddresses[0]
	I0716 17:46:27.196145    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:27.196210    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:27.196210    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:27.342547    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stdout =====>] : 172.27.164.29
	
	I0716 17:46:29.438706    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:29.439652    3116 sshutil.go:53] new ssh client: &{IP:172.27.164.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000\id_rsa Username:docker}
	I0716 17:46:29.571858    3116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 17:46:29.713780    3116 round_trippers.go:463] GET https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 17:46:29.713780    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.713780    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.713780    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.726705    3116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0716 17:46:29.727931    3116 round_trippers.go:463] PUT https://172.27.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 17:46:29.727931    3116 round_trippers.go:469] Request Headers:
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Content-Type: application/json
	I0716 17:46:29.727931    3116 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 17:46:29.727931    3116 round_trippers.go:473]     Accept: application/json, */*
	I0716 17:46:29.731032    3116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 17:46:29.738673    3116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 17:46:29.741426    3116 addons.go:510] duration metric: took 9.8471536s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 17:46:29.741651    3116 start.go:246] waiting for cluster config update ...
	I0716 17:46:29.741651    3116 start.go:255] writing updated cluster config ...
	I0716 17:46:29.745087    3116 out.go:177] 
	I0716 17:46:29.756703    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:46:29.756703    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.763712    3116 out.go:177] * Starting "ha-339000-m02" control-plane node in "ha-339000" cluster
	I0716 17:46:29.772702    3116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:46:29.772702    3116 cache.go:56] Caching tarball of preloaded images
	I0716 17:46:29.773710    3116 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 17:46:29.773710    3116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 17:46:29.773710    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:46:29.775702    3116 start.go:360] acquireMachinesLock for ha-339000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 17:46:29.775702    3116 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-339000-m02"
	I0716 17:46:29.775702    3116 start.go:93] Provisioning new machine with config: &{Name:ha-339000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-339000 Namespace:default APIServerHAVIP:172.27.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.164.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 17:46:29.775702    3116 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 17:46:29.780717    3116 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 17:46:29.780717    3116 start.go:159] libmachine.API.Create for "ha-339000" (driver="hyperv")
	I0716 17:46:29.780717    3116 client.go:168] LocalClient.Create starting
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 17:46:29.780717    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Decoding PEM data...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: Parsing certificate...
	I0716 17:46:29.781705    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:31.564937    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:33.241433    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:34.664681    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:38.134875    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:38.138226    3116 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 17:46:38.592174    3116 main.go:141] libmachine: Creating SSH key...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: Creating VM...
	I0716 17:46:38.817946    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 17:46:41.741213    3116 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 17:46:41.742185    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:41.742185    3116 main.go:141] libmachine: Using switch "Default Switch"
	I0716 17:46:41.742301    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 17:46:43.531294    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:43.531591    3116 main.go:141] libmachine: Creating VHD
	I0716 17:46:43.531591    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C1D531E-ACF9-4B3C-B9C3-95F8F2C01DA3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 17:46:47.250586    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing magic tar header
	I0716 17:46:47.250586    3116 main.go:141] libmachine: Writing SSH key tar header
	I0716 17:46:47.260788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:50.419715    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd' -SizeBytes 20000MB
	I0716 17:46:53.401355    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:53.401639    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-339000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:56.967359    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-339000-m02 -DynamicMemoryEnabled $false
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:46:59.193576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:46:59.194052    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-339000-m02 -Count 2
	I0716 17:47:01.352763    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:01.352941    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\boot2docker.iso'
	I0716 17:47:03.904514    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:03.905518    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:03.905624    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-339000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\disk.vhd'
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:06.552431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:06.553440    3116 main.go:141] libmachine: Starting VM...
	I0716 17:47:06.553440    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-339000-m02
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:10.125433    3116 main.go:141] libmachine: Waiting for host to start...
	I0716 17:47:10.126319    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:12.409194    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:12.409593    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:14.996475    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:14.997057    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:16.007181    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:18.201270    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:18.202297    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:20.802074    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:20.802698    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:21.808577    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:23.994365    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:23.994431    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:26.448364    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:27.449141    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:29.652576    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:29.653475    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stdout =====>] : 
	I0716 17:47:32.134302    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:33.134838    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:35.321539    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:38.030581    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:38.030751    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:40.207884    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:40.208051    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:40.208051    3116 machine.go:94] provisionDockerMachine start ...
	I0716 17:47:40.208144    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:42.387506    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:42.388488    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:44.939946    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:44.941089    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:44.946501    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:44.958457    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:44.958457    3116 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 17:47:45.097092    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 17:47:45.097092    3116 buildroot.go:166] provisioning hostname "ha-339000-m02"
	I0716 17:47:45.097229    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:47.267770    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:47.268756    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:47.268878    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:49.918236    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:49.918806    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:49.925690    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:49.925690    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:49.926273    3116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-339000-m02 && echo "ha-339000-m02" | sudo tee /etc/hostname
	I0716 17:47:50.098399    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-339000-m02
	
	I0716 17:47:50.098399    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:52.289790    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:52.290626    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:52.290788    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:54.811144    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:54.816978    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:47:54.817741    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:47:54.817741    3116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-339000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-339000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-339000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 17:47:54.974078    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
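The `/etc/hosts` guard the provisioner just ran above (replace an existing `127.0.1.1` entry, otherwise append one) can be exercised standalone against a scratch copy of the file, with no sudo. This is a sketch of the same logic, not minikube's code; the sample file contents are invented for illustration:

```shell
#!/bin/sh
# Work on a scratch copy so no root access is needed.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=ha-339000-m02

# Same shape as the provisioner's command: only touch the file if the
# hostname is not already present; prefer rewriting the 127.0.1.1 line.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it twice is a no-op the second time, which is the point of the outer `grep` guard.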
	I0716 17:47:54.974078    3116 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 17:47:54.974078    3116 buildroot.go:174] setting up certificates
	I0716 17:47:54.974078    3116 provision.go:84] configureAuth start
	I0716 17:47:54.974078    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:47:57.134451    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:57.135234    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:47:59.680288    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:47:59.680874    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:01.778463    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:01.779139    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:04.263622    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:04.263870    3116 provision.go:143] copyHostCerts
	I0716 17:48:04.264008    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 17:48:04.264475    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 17:48:04.264475    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 17:48:04.265108    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 17:48:04.266662    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 17:48:04.267040    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 17:48:04.267040    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 17:48:04.268527    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 17:48:04.268527    3116 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 17:48:04.268527    3116 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 17:48:04.269254    3116 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 17:48:04.270118    3116 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-339000-m02 san=[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]
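The server cert generated above carries the SAN list `[127.0.0.1 172.27.165.29 ha-339000-m02 localhost minikube]`. minikube does this internally in Go; as a rough stand-in, the same certificate shape can be produced with `openssl` (requires OpenSSL 1.1.1+ for `-addext`). The paths and subject here are illustrative only:

```shell
#!/bin/sh
# Hedged sketch: a self-signed server cert with the SANs from the log,
# via openssl rather than minikube's internal cert generator.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.pem" \
  -subj "/O=jenkins.ha-339000-m02" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.27.165.29,DNS:ha-339000-m02,DNS:localhost,DNS:minikube"
# Inspect the SANs that ended up in the cert.
openssl x509 -in "$DIR/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
```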
	I0716 17:48:04.494141    3116 provision.go:177] copyRemoteCerts
	I0716 17:48:04.510510    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 17:48:04.510510    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:06.603238    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:09.110289    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:09.110659    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:09.110937    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:09.226546    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7155306s)
	I0716 17:48:09.226546    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 17:48:09.227051    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0716 17:48:09.276630    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 17:48:09.276892    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0716 17:48:09.322740    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 17:48:09.323035    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 17:48:09.379077    3116 provision.go:87] duration metric: took 14.4049412s to configureAuth
	I0716 17:48:09.379077    3116 buildroot.go:189] setting minikube options for container-runtime
	I0716 17:48:09.379235    3116 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:48:09.379840    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:11.453554    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:11.453894    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:13.968722    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:13.975232    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:13.975232    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:13.975784    3116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 17:48:14.110035    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 17:48:14.110161    3116 buildroot.go:70] root file system type: tmpfs
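The root-filesystem probe above is a one-liner worth noting: buildroot guest images run from tmpfs, and minikube branches on that answer. The probe itself runs anywhere GNU coreutils is available (the reported type will differ outside the VM):

```shell
#!/bin/sh
# Same probe the provisioner runs over SSH: filesystem type of /.
# Inside the buildroot guest this prints "tmpfs"; on a normal host
# it will print ext4, xfs, overlay, etc.
df --output=fstype / | tail -n 1
```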
	I0716 17:48:14.110429    3116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 17:48:14.110429    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:16.224902    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:18.749877    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:18.750448    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:18.756849    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:18.757584    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:18.757584    3116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.164.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 17:48:18.917444    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.164.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 17:48:18.917580    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:21.041780    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:21.042179    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:23.606328    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:23.606973    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:23.613313    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:23.613862    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:23.613862    3116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 17:48:25.941849    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 17:48:25.941899    3116 machine.go:97] duration metric: took 45.7336685s to provisionDockerMachine
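The `diff ... || { mv ...; restart; }` command above is an install-if-changed idiom: the unit file is only swapped in (and docker restarted) when the new rendering differs from what is on disk, and the `can't stat` diff error on a fresh VM simply means "no old file, so install". A minimal sketch of the same idiom with scratch files in place of the systemd unit and restart:

```shell
#!/bin/sh
# Install-if-changed idiom from the log, against scratch files.
OLD=$(mktemp)
NEW=$(mktemp)
echo "v2" > "$NEW"
rm -f "$OLD"   # simulate the first-run case: old unit does not exist yet

# diff exits non-zero when files differ or OLD is missing, so the
# install branch runs; on an identical re-render it would be skipped.
diff -u "$OLD" "$NEW" 2>/dev/null || { mv "$NEW" "$OLD"; echo "installed"; }
cat "$OLD"
```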
	I0716 17:48:25.941981    3116 client.go:171] duration metric: took 1m56.1607204s to LocalClient.Create
	I0716 17:48:25.941981    3116 start.go:167] duration metric: took 1m56.1608026s to libmachine.API.Create "ha-339000"
	I0716 17:48:25.942034    3116 start.go:293] postStartSetup for "ha-339000-m02" (driver="hyperv")
	I0716 17:48:25.942034    3116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 17:48:25.956723    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 17:48:25.956723    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:28.128549    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:28.129159    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:30.690560    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:30.690660    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:30.691078    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:30.804463    3116 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8477204s)
	I0716 17:48:30.818282    3116 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 17:48:30.825927    3116 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 17:48:30.825927    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 17:48:30.826466    3116 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 17:48:30.827574    3116 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 17:48:30.827716    3116 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 17:48:30.839835    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 17:48:30.860232    3116 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 17:48:30.910712    3116 start.go:296] duration metric: took 4.9686594s for postStartSetup
	I0716 17:48:30.913962    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:33.089586    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:33.090289    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:35.575646    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:35.576249    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:35.576249    3116 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-339000\config.json ...
	I0716 17:48:35.579600    3116 start.go:128] duration metric: took 2m5.8033979s to createHost
	I0716 17:48:35.579600    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:37.678780    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:37.678972    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:40.133487    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:40.140023    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:40.140252    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:40.140252    3116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 17:48:40.291190    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177320.294492379
	
	I0716 17:48:40.291249    3116 fix.go:216] guest clock: 1721177320.294492379
	I0716 17:48:40.291249    3116 fix.go:229] Guest: 2024-07-16 17:48:40.294492379 -0700 PDT Remote: 2024-07-16 17:48:35.5796 -0700 PDT m=+333.147425901 (delta=4.714892379s)
	I0716 17:48:40.291331    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:42.427596    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:42.427640    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:42.427943    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:44.913548    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:44.919942    3116 main.go:141] libmachine: Using SSH client type: native
	I0716 17:48:44.920727    3116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.29 22 <nil> <nil>}
	I0716 17:48:44.920727    3116 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721177320
	I0716 17:48:45.069104    3116 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 00:48:40 UTC 2024
	
	I0716 17:48:45.069635    3116 fix.go:236] clock set: Wed Jul 17 00:48:40 UTC 2024
	 (err=<nil>)
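The clock-sync exchange above reads the guest clock as a Unix epoch, computes the host/guest delta, and pushes the host's epoch back with `sudo date -s @<epoch>`. The epoch seen in the log can be decoded locally with GNU `date`:

```shell
#!/bin/sh
# Decode the guest-clock epoch from the log into the UTC time the
# guest reported back after "sudo date -s @1721177320".
EPOCH=1721177320
date -u -d "@$EPOCH" +"%a %b %e %H:%M:%S UTC %Y"
# → Wed Jul 17 00:48:40 UTC 2024
```

This matches the `clock set:` line in the log, confirming the delta of ~4.7s was applied.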
	I0716 17:48:45.069635    3116 start.go:83] releasing machines lock for "ha-339000-m02", held for 2m15.2933959s
	I0716 17:48:45.070447    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:47.143370    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:47.144295    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:49.658886    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:49.662219    3116 out.go:177] * Found network options:
	I0716 17:48:49.665622    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.668352    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.671477    3116 out.go:177]   - NO_PROXY=172.27.164.29
	W0716 17:48:49.676037    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 17:48:49.676815    3116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 17:48:49.679805    3116 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 17:48:49.679805    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:49.691804    3116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 17:48:49.692800    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-339000-m02 ).state
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.851480    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.852140    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:51.889675    3116 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:51.890284    3116 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-339000-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 17:48:54.451718    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.451795    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.451795    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stdout =====>] : 172.27.165.29
	
	I0716 17:48:54.477261    3116 main.go:141] libmachine: [stderr =====>] : 
	I0716 17:48:54.477261    3116 sshutil.go:53] new ssh client: &{IP:172.27.165.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-339000-m02\id_rsa Username:docker}
	I0716 17:48:54.557941    3116 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8661173s)
	W0716 17:48:54.558024    3116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 17:48:54.568240    3116 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.888416s)
	W0716 17:48:54.569158    3116 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 17:48:54.571191    3116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 17:48:54.602227    3116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 17:48:54.602388    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:54.602638    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:54.647070    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 17:48:54.678933    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 17:48:54.698568    3116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 17:48:54.710181    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 17:48:54.742965    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 17:48:54.776228    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 17:48:54.821216    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0716 17:48:54.828014    3116 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 17:48:54.828014    3116 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 17:48:54.856026    3116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 17:48:54.887007    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 17:48:54.916961    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 17:48:54.946175    3116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 17:48:54.977133    3116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 17:48:55.008583    3116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 17:48:55.041136    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:55.233128    3116 ssh_runner.go:195] Run: sudo systemctl restart containerd
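The containerd adjustments logged above can be reproduced as a standalone sketch; this runs the same `sed` expressions (taken verbatim from the log) against a scratch copy of a minimal config rather than the node's real `/etc/containerd/config.toml`, which minikube edits over SSH:

```shell
# Sketch: apply minikube's "cgroupfs" containerd edits to a scratch config.
# The heredoc stands in for a default config.toml; keys mirror the log.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitutions the ssh_runner log shows:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # prints: SystemdCgroup = false
rm -f "$cfg"
```

On the real node these edits are followed by `systemctl daemon-reload` and a containerd restart, as the next log lines show.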
	I0716 17:48:55.268383    3116 start.go:495] detecting cgroup driver to use...
	I0716 17:48:55.280294    3116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 17:48:55.321835    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.360772    3116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 17:48:55.410751    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 17:48:55.446392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.483746    3116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 17:48:55.549392    3116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 17:48:55.575212    3116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 17:48:55.625942    3116 ssh_runner.go:195] Run: which cri-dockerd
	I0716 17:48:55.644117    3116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 17:48:55.662133    3116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 17:48:55.710556    3116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 17:48:55.902702    3116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 17:48:56.092640    3116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 17:48:56.092812    3116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 17:48:56.140744    3116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 17:48:56.339384    3116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 17:49:57.463999    3116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.12424s)
	I0716 17:49:57.479400    3116 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 17:49:57.516551    3116 out.go:177] 
	W0716 17:49:57.521552    3116 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 00:48:24 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.241896977Z" level=info msg="Starting up"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.243318099Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 00:48:24 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:24.244617720Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.275892820Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303001153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303124655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303234156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303252457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303384059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303404659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303626563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303746365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303770365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.303782265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304022869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.304505877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307674327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.307791029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308110834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308400439Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308565642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.308717744Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368314796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368433498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368514799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368720803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368746303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.368889205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369365013Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369596617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369650917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369671218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369692218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369708818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369723219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369742719Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369760119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369776719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369792220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369805420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369827220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369842421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369859621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369896721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369926122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369940122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369953922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369970423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.369986723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370000523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370013123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370030124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370051324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370149925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370230127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370309028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370350129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370375329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370393229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370407730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370430730Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370445430Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370782936Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370940938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.370988139Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 00:48:24 ha-339000-m02 dockerd[672]: time="2024-07-17T00:48:24.371007639Z" level=info msg="containerd successfully booted in 0.096197s"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.318869987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.349661649Z" level=info msg="Loading containers: start."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.538996184Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.777966309Z" level=info msg="Loading containers: done."
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.813805898Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.814032102Z" level=info msg="Daemon has completed initialization"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943488028Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 00:48:25 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:25.943571229Z" level=info msg="API listen on [::]:2376"
	Jul 17 00:48:25 ha-339000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.369757788Z" level=info msg="Processing signal 'terminated'"
	Jul 17 00:48:56 ha-339000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.371659591Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.375774697Z" level=info msg="Daemon shutdown complete"
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376100098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 00:48:56 ha-339000-m02 dockerd[666]: time="2024-07-17T00:48:56.376232698Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 00:48:57 ha-339000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 00:48:57 ha-339000-m02 dockerd[1072]: time="2024-07-17T00:48:57.441674342Z" level=info msg="Starting up"
	Jul 17 00:49:57 ha-339000-m02 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 00:49:57 ha-339000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 17:49:57.521552    3116 out.go:239] * 
	W0716 17:49:57.522536    3116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 17:49:57.526535    3116 out.go:177] 
	
	
	==> Docker <==
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/286718c0567bc4483bcfe087c41990d4da59a6812f976115e9331588a6df0b36/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7188a6b83dabc2793f2a4d404c103e97dd27df147490fdaf17511b238598343d/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:46:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af2cf1f3df1119bd0846692fb05a343436bccea46b6f425a9798d3e0f0988445/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934272927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934770028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.934884528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.993888819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994323820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.994345820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:41 ha-339000 dockerd[1435]: time="2024-07-17T00:46:41.996697524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.055604421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058172312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058527710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:46:42 ha-339000 dockerd[1435]: time="2024-07-17T00:46:42.058934209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.792959218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.793982917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794013917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 dockerd[1435]: time="2024-07-17T00:50:31.794412417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:31 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0eab77abc5c2034e0f9b3cc13c0efde8590dc48e231f9a2a32e3cce640afc3f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 00:50:33 ha-339000 cri-dockerd[1328]: time="2024-07-17T00:50:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.888991028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889060028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889075428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 00:50:33 ha-339000 dockerd[1435]: time="2024-07-17T00:50:33.889180729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cfd9e6da5e26       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Running             busybox                   0                   c0eab77abc5c2       busybox-fc5497c4f-2lw5c
	7c292d2d62a8d       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   7188a6b83dabc       coredns-7db6d8ff4d-tnbkg
	7cb40bd8f4a45       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   af2cf1f3df111       storage-provisioner
	3fad8a05f536b       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   286718c0567bc       coredns-7db6d8ff4d-fnphs
	78d47e629c01b       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              27 minutes ago      Running             kindnet-cni               0                   1cac035102228       kindnet-qld5s
	4b78e7e23ac25       53c535741fb44                                                                                         27 minutes ago      Running             kube-proxy                0                   5d3ac3c58f7ff       kube-proxy-pgd84
	191e74eb72132       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     28 minutes ago      Running             kube-vip                  0                   17db6761e1eb3       kube-vip-ha-339000
	0db2b9ec3c99a       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   977642232fb5c       etcd-ha-339000
	ae665f15bfadb       56ce0fd9fb532                                                                                         28 minutes ago      Running             kube-apiserver            0                   73726dfbabaa7       kube-apiserver-ha-339000
	92e8436c41a8e       e874818b3caac                                                                                         28 minutes ago      Running             kube-controller-manager   0                   d786fa5a135ce       kube-controller-manager-ha-339000
	d1feb8291f6eb       7820c83aa1394                                                                                         28 minutes ago      Running             kube-scheduler            0                   deb753b1b1f7d       kube-scheduler-ha-339000
	
	
	==> coredns [3fad8a05f536] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58836 - 64713 "HINFO IN 60853611470180886.8375493230672009972. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027110498s
	[INFO] 10.244.0.4:47774 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.188209086s
	[INFO] 10.244.0.4:54955 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.041826019s
	[INFO] 10.244.0.4:52719 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.104768404s
	[INFO] 10.244.0.4:47694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003149s
	[INFO] 10.244.0.4:59771 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012895106s
	[INFO] 10.244.0.4:35963 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001728s
	[INFO] 10.244.0.4:59023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002336s
	[INFO] 10.244.0.4:60347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0004136s
	[INFO] 10.244.0.4:39498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000534201s
	[INFO] 10.244.0.4:40846 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001588s
	
	
	==> coredns [7c292d2d62a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51201 - 44520 "HINFO IN 5198808949217006063.7204571677786853637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143631558s
	[INFO] 10.244.0.4:38160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004338s
	[INFO] 10.244.0.4:39856 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037714417s
	[INFO] 10.244.0.4:59088 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002258s
	[INFO] 10.244.0.4:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002054s
	[INFO] 10.244.0.4:41808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205501s
	[INFO] 10.244.0.4:51376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003632s
	[INFO] 10.244.0.4:56095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001912s
	[INFO] 10.244.0.4:47792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001197s
	[INFO] 10.244.0.4:60138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001247s
	[INFO] 10.244.0.4:54518 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001883s
	
	
	==> describe nodes <==
	Name:               ha-339000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T17_46_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:14:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:11:06 +0000   Wed, 17 Jul 2024 00:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.29
	  Hostname:    ha-339000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82008871fce64314956fd8270edc8d57
	  System UUID:                841fb39e-176b-8246-932b-b89e25447e5d
	  Boot ID:                    d3e13460-f057-4ba1-bf21-33740644e7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2lw5c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-fnphs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-tnbkg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-339000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-qld5s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-339000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-339000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-pgd84                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-339000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-339000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node ha-339000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node ha-339000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node ha-339000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node ha-339000 event: Registered Node ha-339000 in Controller
	  Normal  NodeReady                27m                kubelet          Node ha-339000 status is now: NodeReady
	
	
	Name:               ha-339000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-339000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-339000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T18_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-339000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:12:58 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:12:58 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:12:58 +0000   Wed, 17 Jul 2024 01:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:12:58 +0000   Wed, 17 Jul 2024 01:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.164.48
	  Hostname:    ha-339000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff4f98c52674609a5c1f5d575590d85
	  System UUID:                95806f43-d226-fc45-855f-7545f5ff8c84
	  Boot ID:                    189078cc-12dc-4313-b8cc-2bd120e015e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8tbsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-gt8g4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m19s
	  kube-system                 kube-proxy-q8dsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  7m19s (x2 over 7m19s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x2 over 7m19s)  kubelet          Node ha-339000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x2 over 7m19s)  kubelet          Node ha-339000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m15s                  node-controller  Node ha-339000-m03 event: Registered Node ha-339000-m03 in Controller
	  Normal  NodeReady                6m48s                  kubelet          Node ha-339000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.626571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.597907] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.180973] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul17 00:45] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.105706] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.560898] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.196598] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.216293] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.857165] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.203644] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.184006] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.281175] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.410238] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +0.098147] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.123832] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.251626] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.094928] kauditd_printk_skb: 70 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.930078] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[ +13.821982] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.915979] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 00:50] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 01:06] hrtimer: interrupt took 1854501 ns
	
	
	==> etcd [0db2b9ec3c99] <==
	{"level":"info","ts":"2024-07-17T00:46:40.36048Z","caller":"traceutil/trace.go:171","msg":"trace[2105760050] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"140.29588ms","start":"2024-07-17T00:46:40.220111Z","end":"2024-07-17T00:46:40.360406Z","steps":["trace[2105760050] 'process raft request'  (duration: 140.03158ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:46:41.65736Z","caller":"traceutil/trace.go:171","msg":"trace[1673640215] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"105.240363ms","start":"2024-07-17T00:46:41.552084Z","end":"2024-07-17T00:46:41.657324Z","steps":["trace[1673640215] 'process raft request'  (duration: 105.115163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:56:01.552908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-17T00:56:01.588072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":976,"took":"34.699039ms","hash":3766188404,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T00:56:01.588121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3766188404,"revision":976,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:01:01.574139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1513}
	{"level":"info","ts":"2024-07-17T01:01:01.585151Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1513,"took":"9.785406ms","hash":3852759921,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:01:01.585617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3852759921,"revision":1513,"compact-revision":976}
	{"level":"info","ts":"2024-07-17T01:04:13.014576Z","caller":"traceutil/trace.go:171","msg":"trace[872493798] transaction","detail":"{read_only:false; response_revision:2392; number_of_response:1; }","duration":"177.131462ms","start":"2024-07-17T01:04:12.837413Z","end":"2024-07-17T01:04:13.014545Z","steps":["trace[872493798] 'process raft request'  (duration: 176.960762ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:01.592724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-07-17T01:06:01.60253Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2051,"took":"8.916702ms","hash":355462830,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-17T01:06:01.602647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":355462830,"revision":2051,"compact-revision":1513}
	{"level":"info","ts":"2024-07-17T01:06:42.274723Z","caller":"traceutil/trace.go:171","msg":"trace[983672699] transaction","detail":"{read_only:false; response_revision:2660; number_of_response:1; }","duration":"112.448025ms","start":"2024-07-17T01:06:42.162253Z","end":"2024-07-17T01:06:42.274701Z","steps":["trace[983672699] 'process raft request'  (duration: 112.241325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:06:42.853896Z","caller":"traceutil/trace.go:171","msg":"trace[679544412] transaction","detail":"{read_only:false; response_revision:2661; number_of_response:1; }","duration":"221.82955ms","start":"2024-07-17T01:06:42.632048Z","end":"2024-07-17T01:06:42.853877Z","steps":["trace[679544412] 'process raft request'  (duration: 221.09335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:07:01.40972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.351031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7064336636883510776 > lease_revoke:<id:620990be27382545>","response":"size:29"}
	{"level":"info","ts":"2024-07-17T01:07:01.409947Z","caller":"traceutil/trace.go:171","msg":"trace[1328045754] linearizableReadLoop","detail":"{readStateIndex:3001; appliedIndex:3000; }","duration":"269.211557ms","start":"2024-07-17T01:07:01.140722Z","end":"2024-07-17T01:07:01.409933Z","steps":["trace[1328045754] 'read index received'  (duration: 122.179226ms)","trace[1328045754] 'applied index is now lower than readState.Index'  (duration: 147.031131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:07:01.410655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.898858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-07-17T01:07:01.410717Z","caller":"traceutil/trace.go:171","msg":"trace[1287806677] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2729; }","duration":"270.008258ms","start":"2024-07-17T01:07:01.140698Z","end":"2024-07-17T01:07:01.410707Z","steps":["trace[1287806677] 'agreement among raft nodes before linearized reading'  (duration: 269.690957ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:07:05.608227Z","caller":"traceutil/trace.go:171","msg":"trace[977721237] transaction","detail":"{read_only:false; response_revision:2744; number_of_response:1; }","duration":"129.521427ms","start":"2024-07-17T01:07:05.478688Z","end":"2024-07-17T01:07:05.608209Z","steps":["trace[977721237] 'process raft request'  (duration: 129.341327ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:11:01.612897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2587}
	{"level":"info","ts":"2024-07-17T01:11:01.626116Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2587,"took":"12.501801ms","hash":3224311936,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1982464,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-07-17T01:11:01.626215Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3224311936,"revision":2587,"compact-revision":2051}
	{"level":"info","ts":"2024-07-17T01:11:07.411618Z","caller":"traceutil/trace.go:171","msg":"trace[1857286762] transaction","detail":"{read_only:false; response_revision:3223; number_of_response:1; }","duration":"111.812009ms","start":"2024-07-17T01:11:07.299785Z","end":"2024-07-17T01:11:07.411597Z","steps":["trace[1857286762] 'process raft request'  (duration: 111.694809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:11:07.564647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.541611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:11:07.564832Z","caller":"traceutil/trace.go:171","msg":"trace[1937676543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3223; }","duration":"141.741911ms","start":"2024-07-17T01:11:07.423051Z","end":"2024-07-17T01:11:07.564793Z","steps":["trace[1937676543] 'range keys from in-memory index tree'  (duration: 141.472411ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:14:09 up 30 min,  0 users,  load average: 0.22, 0.26, 0.30
	Linux ha-339000 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d47e629c01] <==
	I0717 01:13:07.434655       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:13:17.436923       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:13:17.437024       1 main.go:303] handling current node
	I0717 01:13:17.437045       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:13:17.437889       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:13:27.427551       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:13:27.427605       1 main.go:303] handling current node
	I0717 01:13:27.427624       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:13:27.427631       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:13:37.436772       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:13:37.436913       1 main.go:303] handling current node
	I0717 01:13:37.436934       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:13:37.436943       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:13:47.434052       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:13:47.434230       1 main.go:303] handling current node
	I0717 01:13:47.434362       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:13:47.434526       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:13:57.429087       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:13:57.429185       1 main.go:303] handling current node
	I0717 01:13:57.429202       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:13:57.429225       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:14:07.432749       1 main.go:299] Handling node with IPs: map[172.27.164.48:{}]
	I0717 01:14:07.432787       1 main.go:326] Node ha-339000-m03 has CIDR [10.244.1.0/24] 
	I0717 01:14:07.433056       1 main.go:299] Handling node with IPs: map[172.27.164.29:{}]
	I0717 01:14:07.433196       1 main.go:303] handling current node
	
	
	==> kube-apiserver [ae665f15bfad] <==
	I0717 00:46:04.304358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 00:46:04.331798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 00:46:04.331881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:46:05.619002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:46:05.741062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:46:05.939352       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:46:05.964770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.164.29]
	I0717 00:46:05.966221       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:46:05.976528       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:46:06.365958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0717 00:46:07.505234       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 00:46:07.507598       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 00:46:07.505959       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 166.003µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 00:46:07.508793       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 00:46:07.508861       1 timeout.go:142] post-timeout activity - time-elapsed: 3.693064ms, PATCH "/api/v1/namespaces/default/events/ha-339000.17e2d98174aaf414" result: <nil>
	I0717 00:46:07.616027       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:46:07.651174       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:46:07.685151       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:46:20.222494       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 00:46:20.565491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 01:02:29.377162       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65360: use of closed network connection
	E0717 01:02:30.550086       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65368: use of closed network connection
	E0717 01:02:31.700864       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65377: use of closed network connection
	E0717 01:03:07.351619       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65398: use of closed network connection
	E0717 01:03:17.822592       1 conn.go:339] Error on socket receive: read tcp 172.27.175.254:8443->172.27.160.1:65400: use of closed network connection
	
	
	==> kube-controller-manager [92e8436c41a8] <==
	I0717 00:46:40.548808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="442.901µs"
	I0717 00:46:40.549752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.8µs"
	I0717 00:46:40.586545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.2µs"
	I0717 00:46:40.606661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42µs"
	I0717 00:46:42.880174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.9µs"
	I0717 00:46:43.001198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.866161ms"
	I0717 00:46:43.002503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.9µs"
	I0717 00:46:43.029087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.8µs"
	I0717 00:46:43.078762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.06204ms"
	I0717 00:46:43.078873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.3µs"
	I0717 00:46:44.601036       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 00:50:31.286881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.770922ms"
	I0717 00:50:31.329131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.874464ms"
	I0717 00:50:31.329214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0717 00:50:34.278648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.588945ms"
	I0717 00:50:34.279764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.2µs"
	I0717 01:06:50.412939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-339000-m03\" does not exist"
	I0717 01:06:50.457469       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-339000-m03" podCIDRs=["10.244.1.0/24"]
	I0717 01:06:54.850142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-339000-m03"
	I0717 01:07:21.350361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-339000-m03"
	I0717 01:07:21.400227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.3µs"
	I0717 01:07:21.401000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.8µs"
	I0717 01:07:21.425714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0717 01:07:24.751410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.345403ms"
	I0717 01:07:24.752323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.2µs"
	
	
	==> kube-proxy [4b78e7e23ac2] <==
	I0717 00:46:21.547151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:46:21.569406       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.164.29"]
	I0717 00:46:21.663287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:46:21.663402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:46:21.663470       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:46:21.667791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:46:21.668391       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:46:21.668462       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:46:21.670025       1 config.go:192] "Starting service config controller"
	I0717 00:46:21.670140       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:46:21.670173       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:46:21.670182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:46:21.670934       1 config.go:319] "Starting node config controller"
	I0717 00:46:21.670965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:46:21.770842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:46:21.770856       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:46:21.771242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1feb8291f6e] <==
	W0717 00:46:04.314020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:46:04.314222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:46:04.404772       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:46:04.405391       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:46:04.461176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:46:04.461307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:46:04.470629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:46:04.470832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:46:04.490143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:46:04.490377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:46:04.609486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.609740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.631578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:46:04.631703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:46:04.760247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:46:04.760410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:46:04.830688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.830869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.878065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:46:04.878512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:46:04.894150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.894178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:46:04.922663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:46:04.923043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 00:46:07.101141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:10:07 ha-339000 kubelet[2368]: E0717 01:10:07.789022    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:10:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:10:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:11:07 ha-339000 kubelet[2368]: E0717 01:11:07.791070    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:11:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:11:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:12:07 ha-339000 kubelet[2368]: E0717 01:12:07.787135    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:12:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:12:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:12:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:12:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:13:07 ha-339000 kubelet[2368]: E0717 01:13:07.787307    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:13:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:13:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:13:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:13:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:14:07 ha-339000 kubelet[2368]: E0717 01:14:07.790264    2368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:14:07 ha-339000 kubelet[2368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:14:07 ha-339000 kubelet[2368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:14:07 ha-339000 kubelet[2368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:14:07 ha-339000 kubelet[2368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0716 18:14:01.468465    7476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-339000 -n ha-339000: (12.1144353s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-339000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-7zvzh
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh
helpers_test.go:282: (dbg) kubectl --context ha-339000 describe pod busybox-fc5497c4f-7zvzh:

-- stdout --
	Name:             busybox-fc5497c4f-7zvzh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjd9m (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rjd9m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  8m44s (x5 over 23m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  104s (x3 over 7m1s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (86.55s)

TestMultiNode/serial/FreshStart2Nodes (470.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-343600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0716 18:46:05.793384    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 18:47:04.041967    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 18:49:00.818788    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 18:51:05.792165    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-343600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (7m15.2908619s)

-- stdout --
	* [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.27.170.61
	  - NO_PROXY=172.27.170.61
	
	

-- /stdout --
** stderr ** 
	W0716 18:44:16.178769    2528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
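	The "Images are preloaded, skipping loading" decision logged above amounts to comparing the output of `docker images --format {{.Repository}}:{{.Tag}}` against the image list required for the requested Kubernetes version. A minimal sketch of that check (the required list mirrors the images printed in the log; the helper name is illustrative, not minikube's actual function):

```python
# Sketch: decide whether cached image loading can be skipped.
# REQUIRED_IMAGES mirrors the images printed by `docker images` in the log.
REQUIRED_IMAGES = {
    "registry.k8s.io/kube-apiserver:v1.30.2",
    "registry.k8s.io/kube-controller-manager:v1.30.2",
    "registry.k8s.io/kube-scheduler:v1.30.2",
    "registry.k8s.io/kube-proxy:v1.30.2",
    "registry.k8s.io/etcd:3.5.12-0",
    "registry.k8s.io/coredns/coredns:v1.11.1",
    "registry.k8s.io/pause:3.9",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
}

def images_preloaded(docker_images_output: str) -> bool:
    """True when every required image already appears in the
    `docker images --format {{.Repository}}:{{.Tag}}` output."""
    loaded = {line.strip() for line in docker_images_output.splitlines()
              if line.strip()}
    return REQUIRED_IMAGES.issubset(loaded)
```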
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
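	The kubeadm config emitted above is a single `---`-separated YAML stream of four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch of a structural sanity check on such a stream, using only string handling (a real validator would parse the YAML properly; the sample below is an abbreviated stand-in for the full config):

```python
# Sketch: extract the `kind:` of each document in a '---'-separated
# YAML stream, as a cheap structural check on a generated kubeadm config.

def document_kinds(stream: str) -> list:
    """Return the `kind:` value of each YAML document, in order."""
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

# Abbreviated stand-in for the four-document config shown in the log.
SAMPLE = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""
```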
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
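	The bash one-liner above keeps `/etc/hosts` idempotent: it strips any existing `control-plane.minikube.internal` entry, appends the current IP, and copies the result back into place. The same transformation sketched in Python (the function name is illustrative; it operates on file contents rather than touching `/etc/hosts` directly):

```python
# Sketch of the /etc/hosts rewrite performed by the logged bash one-liner:
# remove any stale tab-separated entry for the host, then append a fresh
# one, so repeated runs never accumulate duplicate lines.

def update_hosts(contents: str, ip: str,
                 host: str = "control-plane.minikube.internal") -> str:
    kept = [line for line in contents.splitlines()
            if not line.endswith("\t" + host)]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"
```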
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
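	(The `node_ready.go:53` check above polls GET `/api/v1/nodes/multinode-343600` and inspects the node's `Ready` condition. A self-contained sketch of that status extraction — an illustrative helper under the assumption that only `status.conditions` matters, not minikube's actual implementation:)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeReady reports whether the "Ready" condition in a v1 Node JSON body has
// status "True". Illustrative sketch of the readiness check polled above.
func nodeReady(body []byte) (bool, error) {
	var node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(body, &node); err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition reported yet: treat as not ready.
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, _ := nodeReady(body)
	fmt.Println(ready)
}
```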
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
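The loop above repeatedly GETs the Node object and inspects its `Ready` condition until the status flips to `"True"` (here after ~19.5 s). The actual check lives in minikube's Go code (`node_ready.go`); as an illustration only, a minimal Python sketch of the same condition test, using a trimmed Node object shaped like the (truncated) response bodies in this log:

```python
import json

def node_is_ready(node: dict) -> bool:
    """Return True when the Node's Ready condition reports status "True".

    Mirrors the check behind the node_ready.go lines above; the real
    implementation is Go, so this is only an illustrative sketch.
    """
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition reported yet

# A trimmed Node object shaped like the response bodies in the log
# (the logged bodies are truncated, so status.conditions is supplied here).
node = json.loads("""
{"kind": "Node", "apiVersion": "v1",
 "metadata": {"name": "multinode-343600"},
 "status": {"conditions": [{"type": "Ready", "status": "True"}]}}
""")
print(node_is_ready(node))  # True
```

Until the kubelet posts `Ready: True`, this check fails and the poller logs `"Ready":"False"` and retries, exactly as seen in the entries above.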
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
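The request timestamps (roughly :57.14, :57.65, :58.15, ...) show each readiness wait is a poll-with-deadline loop at about a 500 ms interval. A generic sketch of that pattern, in Python for illustration (minikube's own helpers are Go; the names below are made up for this example):

```python
import time

def wait_for(check, timeout: float, interval: float = 0.5) -> bool:
    """Poll check() until it returns True or the deadline passes.

    Loosely models the ~500 ms GET loop visible in the log; this is a
    generic sketch, not minikube's actual wait helper.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False  # timed out, e.g. the 6m0s pod-ready budget
        time.sleep(interval)

# Example: a condition that becomes true on the third poll.
attempts = iter([False, False, True])
print(wait_for(lambda: next(attempts), timeout=5, interval=0.01))  # True
```

In the log, `check` corresponds to a GET of the pod followed by a `Ready` condition test, and `timeout` to the 6m0s budget announced by `pod_ready.go:78`.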
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
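The "Waiting for host to start..." section above polls `Get-VM ... .state` and `ipaddresses[0]` repeatedly, sleeping about a second between rounds until DHCP hands the guest an address (172.27.171.221 on the sixth attempt here). A small Python sketch of that poll-until-IP pattern — `wait_for_ip` is a hypothetical helper for illustration, not minikube's implementation:

```python
import time

def wait_for_ip(get_ip, interval=1.0, timeout=120.0):
    """Poll get_ip() until it returns a non-empty address or time runs out.

    get_ip models one round of the log's
    '(( Get-VM <name> ).networkadapters[0]).ipaddresses[0]' query, which
    returns an empty string until the guest has finished DHCP.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = get_ip()
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("host did not report an IP address in time")

# Simulated run: the first two polls come back empty, the third has the lease.
replies = iter(["", "", "172.27.171.221"])
print(wait_for_ip(lambda: next(replies), interval=0))
```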
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
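The unit installation above uses an install-only-if-changed idiom: `diff` the live unit against the freshly rendered `.new` file, and only on a difference (or, as here, when the live file does not exist yet) promote the new file and restart the service. A self-contained shell sketch of the same pattern, using temporary paths instead of `/lib/systemd/system` and omitting the `systemctl` calls (all paths and file contents below are illustrative, not taken from the test host):

```shell
set -eu

# Stand-in paths for /lib/systemd/system/docker.service{,.new}.
tmpdir=$(mktemp -d)
current="$tmpdir/docker.service"
candidate="$tmpdir/docker.service.new"

# Render the candidate unit (a trivial stand-in body).
printf '[Unit]\nDescription=demo unit\n' > "$candidate"

# First run: $current does not exist, so diff fails with "can't stat" --
# exactly the error seen in the log -- and the candidate is promoted.
# On later runs with identical content, diff succeeds and nothing moves.
diff -u "$current" "$candidate" 2>/dev/null || mv "$candidate" "$current"

test -f "$current" && echo "installed"
```

In the log the `||` branch also chains `systemctl daemon-reload`, `enable`, and `restart`, so an unchanged unit costs one `diff` and no service restart.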
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
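The fix.go sequence above reads the guest clock with `date +%s.%N` over SSH, computes the delta against the host (4.96s here, accumulated while the host waited on Hyper-V calls), and pushes the host time into the guest with `sudo date -s @<epoch>`. A local-only sketch of that drift check; the 1-second tolerance is an assumed value for illustration, not minikube's behavior (minikube re-sets the clock unconditionally):

```shell
# Guest-clock drift check mirroring the fix.go steps above.
# Both readings are taken locally here purely for illustration.
guest_epoch=$(date +%s)   # stands in for `date +%s.%N` run over SSH
host_epoch=$(date +%s)
delta=$((guest_epoch - host_epoch))

# Assumed 1s tolerance; minikube itself always re-sets the guest clock.
if [ "${delta#-}" -gt 1 ]; then
    echo "would run: sudo date -s @${host_epoch}"
fi
```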
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
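The CNI step above disables conflicting bridge/podman configs by renaming them with a `.mk_disabled` suffix rather than deleting them, which is why `87-podman-bridge.conflist` shows up in the `-printf` output. The same `find` invocation against a temp directory standing in for `/etc/cni/net.d`:

```shell
# Disable bridge/podman CNI configs by renaming, as cni.go does above.
# A temp dir stands in for /etc/cni/net.d.
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/10-other.conf"

# GNU find: match bridge/podman configs not already disabled, print each
# path, then rename it with a .mk_disabled suffix.
find "$netd" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

Non-matching files (here `10-other.conf`) are left untouched, and re-running the command is a no-op because disabled files no longer match `-not -name '*.mk_disabled'`.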
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
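The containerd reconfiguration above is a series of in-place `sed` edits on `/etc/containerd/config.toml`, forcing the cgroupfs driver by turning `SystemdCgroup` off, followed by a daemon-reload and restart. The key edit, run here against a scratch copy instead of the real config:

```shell
# The cgroupfs rewrite from containerd.go above, on a scratch config file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same sed expression as the log: flip SystemdCgroup to false, preserving
# the line's leading indentation via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```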
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
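The unit dump above shows why the drop-in carries an empty `ExecStart=` before the real one: systemd drop-ins append to the base unit, and a second `ExecStart=` on a `Type=notify` service is rejected, so the empty directive first clears the inherited command. A sketch writing such a drop-in to a temp path (the `10-machine.conf` name is hypothetical):

```shell
# systemd drop-in that clears an inherited ExecStart before setting a new
# one, as the comments in the docker.service dump above explain.
dropin=$(mktemp -d)/10-machine.conf   # hypothetical drop-in file name
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
```

The resulting file intentionally contains two `ExecStart` lines: the empty reset plus the replacement command.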
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
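Both `/etc/crictl.yaml` writes above (first pointing at containerd, then re-pointing at cri-dockerd once Docker is chosen as the runtime) use the same `mkdir -p` plus `printf | sudo tee` idiom. The same step against a temp root standing in for `/`:

```shell
# Point crictl at the Docker CRI socket, as in the tee steps above.
# $root stands in for / so the sketch never touches the real /etc.
root=$(mktemp -d)
mkdir -p "$root/etc"
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
    | tee "$root/etc/crictl.yaml"
```

`tee` (rather than a plain redirect) matters in the original because the redirection would otherwise happen in the unprivileged shell, while `sudo tee` writes the file with root privileges.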
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	* 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-343600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.0171195s)
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.4211879s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                    |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| pause   | -p json-output-728600                     | json-output-728600       | testUser          | v1.33.1 | 16 Jul 24 18:25 PDT | 16 Jul 24 18:25 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| unpause | -p json-output-728600                     | json-output-728600       | testUser          | v1.33.1 | 16 Jul 24 18:25 PDT | 16 Jul 24 18:25 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| stop    | -p json-output-728600                     | json-output-728600       | testUser          | v1.33.1 | 16 Jul 24 18:25 PDT | 16 Jul 24 18:26 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| delete  | -p json-output-728600                     | json-output-728600       | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:26 PDT | 16 Jul 24 18:26 PDT |
	| start   | -p json-output-error-838300               | json-output-error-838300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:26 PDT |                     |
	|         | --memory=2200 --output=json               |                          |                   |         |                     |                     |
	|         | --wait=true --driver=fail                 |                          |                   |         |                     |                     |
	| delete  | -p json-output-error-838300               | json-output-error-838300 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:26 PDT | 16 Jul 24 18:26 PDT |
	| start   | -p first-471700                           | first-471700             | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:26 PDT | 16 Jul 24 18:29 PDT |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| start   | -p second-471700                          | second-471700            | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:29 PDT | 16 Jul 24 18:32 PDT |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| delete  | -p second-471700                          | second-471700            | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:33 PDT | 16 Jul 24 18:34 PDT |
	| delete  | -p first-471700                           | first-471700             | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:34 PDT | 16 Jul 24 18:35 PDT |
	| start   | -p mount-start-1-477500                   | mount-start-1-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:35 PDT | 16 Jul 24 18:37 PDT |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46464                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-1-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:37 PDT |                     |
	|         | --profile mount-start-1-477500 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46464 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-1-477500 ssh -- ls            | mount-start-1-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:37 PDT | 16 Jul 24 18:37 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| start   | -p mount-start-2-477500                   | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:37 PDT | 16 Jul 24 18:40 PDT |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:40 PDT |                     |
	|         | --profile mount-start-2-477500 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-477500 ssh -- ls            | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:40 PDT | 16 Jul 24 18:40 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-1-477500                   | mount-start-1-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:40 PDT | 16 Jul 24 18:41 PDT |
	|         | --alsologtostderr -v=5                    |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-477500 ssh -- ls            | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:41 PDT | 16 Jul 24 18:41 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| stop    | -p mount-start-2-477500                   | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:41 PDT | 16 Jul 24 18:41 PDT |
	| start   | -p mount-start-2-477500                   | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:41 PDT | 16 Jul 24 18:43 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:43 PDT |                     |
	|         | --profile mount-start-2-477500 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-477500 ssh -- ls            | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:43 PDT | 16 Jul 24 18:43 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-2-477500                   | mount-start-2-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:43 PDT | 16 Jul 24 18:44 PDT |
	| delete  | -p mount-start-1-477500                   | mount-start-1-477500     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT | 16 Jul 24 18:44 PDT |
	| start   | -p multinode-343600                       | multinode-343600         | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT |                     |
	|         | --wait=true --memory=2200                 |                          |                   |         |                     |                     |
	|         | --nodes=2 -v=8                            |                          |                   |         |                     |                     |
	|         | --alsologtostderr                         |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
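The `Get-VMSwitch` query above filters for External switches or the well-known "Default Switch" GUID, then sorts by switch type. A minimal Python sketch of that selection logic, assuming the JSON shape shown in the stdout above (the preference for External switches is an assumption about minikube's intent, not its exact code):

```python
import json

# Well-known GUID of Hyper-V's built-in "Default Switch" (from the log above).
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(get_vmswitch_json: str) -> str:
    """Mimic the Where-Object filter: keep External switches (SwitchType 2)
    or the Default Switch, prefer External, and return the chosen name.
    (Illustrative sketch, not minikube's actual Go implementation.)"""
    switches = json.loads(get_vmswitch_json)
    candidates = [s for s in switches
                  if s["SwitchType"] == 2 or s["Id"] == DEFAULT_SWITCH_ID]
    # SwitchType enum: 0=Private, 1=Internal, 2=External.
    candidates.sort(key=lambda s: s["SwitchType"], reverse=True)
    if not candidates:
        raise RuntimeError("no usable Hyper-V switch found")
    return candidates[0]["Name"]
```

With the output logged above (only the Internal "Default Switch" present), this falls back to `"Default Switch"`, matching the `Using switch "Default Switch"` line later in the log.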
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
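The VHD sequence above (tiny fixed VHD, "magic tar header", convert to dynamic, resize to 20000MB) exists to smuggle the SSH key into the disk image before first boot: the key is packed as a tar stream written into the fixed VHD's data area, and the guest's init installs it on boot. A hedged sketch of the tar-packing step, assuming an illustrative file layout (the exact member names inside the real image may differ):

```python
import io
import tarfile

def magic_key_tar(pubkey: bytes) -> bytes:
    """Pack the SSH public key into an in-memory tar stream, standing in
    for the 'Writing magic tar header' / 'Writing SSH key tar header'
    steps in the log. The member name here is illustrative."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    return buf.getvalue()
```

Writing into a *fixed* VHD first matters because a fixed VHD is raw disk bytes plus a footer, so the tar lands at a predictable offset; the subsequent `Convert-VHD ... -VHDType Dynamic` and `Resize-VHD` then produce the sparse 20000MB disk the VM actually boots.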
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
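The "Waiting for host to start..." stretch above is a simple poll loop: query the VM state, query `networkadapters[0].ipaddresses[0]`, and sleep about a second whenever the IP comes back empty, until the guest's DHCP lease appears (`172.27.170.61` after roughly 28 seconds here). A minimal sketch of that retry loop, with the PowerShell call abstracted into a callback:

```python
import time

def wait_for_ip(query_ip, attempts=60, delay=1.0):
    """Poll query_ip() -- standing in for the
    '((Get-VM <name>).networkadapters[0]).ipaddresses[0]' call in the
    log -- until it returns a non-empty address, or give up."""
    for _ in range(attempts):
        ip = query_ip().strip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM never reported an IP address")
```

The empty `[stdout =====>]` lines in the log correspond to the iterations where the adapter has no address yet.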
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
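The shell snippet just executed over SSH keeps `/etc/hosts` consistent with the new hostname: if no line ends with the hostname, it rewrites an existing `127.0.1.1` entry or appends a fresh one. The same logic as a pure-Python sketch over the file contents:

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Pure-Python rendering of the grep/sed snippet above: no-op if the
    hostname is already present, otherwise rewrite the 127.0.1.1 line
    or append one."""
    if re.search(r"\s" + re.escape(name) + r"$", hosts, re.MULTILINE):
        return hosts
    if re.search(r"^127\.0\.1\.1\s", hosts, re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + name,
                      hosts, flags=re.MULTILINE)
    return hosts.rstrip("\n") + "\n127.0.1.1 " + name + "\n"
```

Like the shell version, this is idempotent: running it a second time leaves the file untouched, which is why the SSH command can be replayed safely on restarts.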
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
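The server certificate is generated with the SAN list shown in the log: loopback, the VM's freshly discovered IP, and the machine names, so the Docker TLS endpoint verifies regardless of which address the client dials. A small sketch of assembling that list (the deduplicate-and-sort presentation matches the log line; treat the exact name set as illustrative):

```python
def server_cert_sans(vm_ip: str, profile: str) -> list:
    """Assemble the SAN list seen in the 'generating server cert' log
    line: loopback, the VM IP, and the machine/profile names."""
    return sorted({"127.0.0.1", vm_ip, "localhost", "minikube", profile})
```

This is why `configureAuth` must wait for the IP query to succeed first: without `172.27.170.61` in the SANs, later TLS connections to the Docker daemon would fail hostname verification.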
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
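The `diff ... || { mv ...; }` command above is minikube's idempotent unit-file update: the new file only replaces the installed one (triggering daemon-reload/enable/restart) when it differs or does not yet exist. A minimal stand-alone sketch of that pattern, using scratch paths instead of `/lib/systemd/system/docker.service`:

```shell
#!/bin/sh
# Sketch of the update-if-changed idiom from the log, on temp files.
# Paths and unit contents here are illustrative, not the real ones.
old=$(mktemp -d)/docker.service
new=${old}.new
printf '[Unit]\nDescription=demo unit\n' > "$new"

# diff fails when the files differ or $old is missing (as in the log,
# where it printed "can't stat ... No such file or directory"), so the
# replacement branch runs exactly when an update is needed.
diff -u "$old" "$new" 2>/dev/null || { mv "$new" "$old"; }

cat "$old"   # prints the demo unit that was just installed
```

On a second run against the same `$old`, `diff` succeeds and the `mv` (and, in the real command, the systemctl restart) is skipped.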
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
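The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place: it pins the sandbox image, forces `SystemdCgroup = false` (the "cgroupfs" driver), and normalizes the runc runtime name, each substitution preserving the line's leading indentation via the `\1` capture group. A sketch of two of those edits against a scratch copy of the file (contents are illustrative; assumes GNU sed, which the Buildroot guest provides):

```shell
#!/bin/sh
# Reproduce two of the log's sed edits on a throwaway config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
EOF

# Pin the pause image; \1 keeps the original indentation.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# Switch containerd to the cgroupfs driver, as the log does.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```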
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
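The /etc/hosts rewrite in the line above is an idempotent drop-then-append: `grep -v` strips any stale `host.minikube.internal` mapping, and the fresh IP is appended in the same pipeline before the file is copied back with sudo. A standalone sketch of the same technique against a temp file (the real run targets `/etc/hosts`; the file, IP, and stale entry here are illustrative):

```shell
# Idempotent hosts-entry update, as in the log: drop any line whose hostname
# field is exactly the target name, then append the fresh mapping.
HOSTS=$(mktemp)
TAB=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$HOSTS"

IP="172.27.160.1"
NAME="host.minikube.internal"
# grep -v removes the stale mapping (|| true guards the case where it was the
# only line); the fresh mapping is appended and the result replaces the file.
{ grep -v "${TAB}${NAME}\$" "$HOSTS" || true; printf '%s\t%s\n' "$IP" "$NAME"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```

Running the pipeline twice leaves exactly one mapping, which is why minikube can re-run it on every start without accumulating duplicates.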
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
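The sequence above is a check-before-copy: a `stat` probe on `/preloaded.tar.lz4` exits 1, so the ~360 MB preload tarball is scp'd over. (The `%!s(MISSING) %!y(MISSING)` in the logged command is almost certainly Go's fmt package escaping the literal `%s %y` format string of `stat -c` when the command is re-logged, not a broken command.) A minimal sketch of the pattern, using an illustrative temp path in place of the remote file:

```shell
# Check-before-copy: stat the target path and only transfer when it is absent.
# stat -c "%s %y" prints size and mtime, which the real code could also use to
# validate an existing copy; here only the exit status matters.
TARGET="/tmp/preloaded.demo.$$.tar.lz4"
rm -f "$TARGET"
if stat -c "%s %y" "$TARGET" >/dev/null 2>&1; then
  RESULT="exists"   # an existing file would skip the expensive transfer
else
  RESULT="missing"  # the log takes this branch and copies the tarball
fi
echo "$RESULT"
```

Note that `stat -c` is the GNU coreutils spelling (BSD stat uses `-f`), which matches the Buildroot-based minikube guest.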
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
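Each of the three cert blocks above follows the recipe that OpenSSL's `c_rehash` automates: place the PEM under `/usr/share/ca-certificates`, compute its subject hash with `openssl x509 -hash`, and symlink `<hash>.0` in `/etc/ssl/certs` so OpenSSL's hash-based lookup finds it. A self-contained sketch with a throwaway certificate (directory, subject, and filenames are all illustrative):

```shell
# Manual c_rehash: OpenSSL resolves trust by looking up <subject-hash>.0 in
# the certs directory, so a symlink named after the subject hash is enough.
CERTS_DIR=$(mktemp -d)

# Generate a throwaway self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA-demo" \
  -keyout "$CERTS_DIR/demo.key" -out "$CERTS_DIR/demo.pem" 2>/dev/null

# Hash the subject and create the lookup symlink, as the log's ln -fs does.
HASH=$(openssl x509 -hash -noout -in "$CERTS_DIR/demo.pem")
ln -fs "$CERTS_DIR/demo.pem" "$CERTS_DIR/$HASH.0"
```

The `test -L || ln -fs` guard in the logged commands just makes the symlink creation idempotent across repeated starts.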
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
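	(The "Writing magic tar header" / fixed-VHD / Convert-VHD sequence above follows the docker-machine convention: a tar archive containing the SSH key is written over the start of the raw fixed disk, and the guest unpacks it on first boot. A minimal sketch of that layout with a plain file standing in for the VHD; the key content and all paths here are placeholders, not values from this run.)

```shell
# Sketch of the "magic tar header" trick: a tar archive holding the SSH key
# is written at offset 0 of a fixed-size raw disk image. A sparse file stands
# in for the VHD; the key content is a placeholder.
WORK=$(mktemp -d)
echo 'ssh-rsa AAAAB3... placeholder' > "$WORK/id_rsa.pub"
tar -C "$WORK" -cf "$WORK/keys.tar" id_rsa.pub

# fixed-size "disk": 10MB of zeros, tar written over its first blocks
truncate -s 10M "$WORK/disk.raw"
dd if="$WORK/keys.tar" of="$WORK/disk.raw" conv=notrunc status=none

# guest side: the archive is readable straight off the front of the device
# (tar stops at the zero blocks that follow the archive)
tar -tf "$WORK/disk.raw"
```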
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
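	(The "Waiting for host to start..." phase above polls the VM state and the first adapter IP roughly once per second; the address stays empty until the guest's DHCP lease arrives, at 18:49:10 in this run. The shape of that loop, with a hypothetical probe standing in for the `(Get-VM ...).networkadapters[0].ipaddresses[0]` call:)

```shell
# Poll-until-IP loop mirroring the driver's wait. The probe is a stand-in:
# it returns nothing until the third call, simulating a DHCP lease arriving.
ATTEMPTS=0
IP=""
probe() {
  ATTEMPTS=$((ATTEMPTS + 1))
  if [ "$ATTEMPTS" -ge 3 ]; then IP="172.27.171.221"; fi
}
while [ -z "$IP" ] && [ "$ATTEMPTS" -lt 60 ]; do
  probe
  [ -n "$IP" ] || sleep 0.01   # the real driver sleeps ~1s between polls
done
echo "got IP: $IP after $ATTEMPTS polls"
```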
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
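The `diff ... || { mv ...; systemctl ... }` command above is a compare-then-replace idiom: the freshly rendered `docker.service.new` only overwrites the installed unit (and triggers `daemon-reload`/`enable`/`restart`) when the two files differ, so re-provisioning is idempotent. A minimal standalone sketch of that pattern, using temporary files in place of the real `/lib/systemd/system` paths:

```shell
#!/bin/sh
# Sketch of the diff-then-move update pattern used for docker.service.
# Paths here are scratch placeholders, not the real systemd locations.
current=$(mktemp) && new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$current"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$new"

# Replace (and, in the real flow, daemon-reload + restart docker)
# only when the rendered unit differs from the installed one.
if ! diff -u "$current" "$new" > /dev/null; then
    mv "$new" "$current"
    echo "unit updated"
else
    rm -f "$new"
    echo "unit unchanged"
fi
rm -f "$current"
```

In the log's case `diff` fails outright ("No such file or directory") because no unit exists yet on the fresh node, which still takes the replace branch.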
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
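The clock fix-up above reads the guest's epoch time over SSH, compares it to the host's clock (here a 4.96s delta), and resets the guest with `sudo date -s @<epoch>`. A hedged sketch of that drift check, with the epoch value taken from the log and an illustrative threshold (the real tolerance is minikube's, not shown here):

```shell
#!/bin/sh
# Sketch of the guest-clock resync decision seen in the log.
# 1721181013 is the guest epoch from the log; the remote value and
# the 2-second threshold are illustrative assumptions.
guest_epoch=1721181013
remote_epoch=1721181008
delta=$((guest_epoch - remote_epoch))
if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
    # The real flow runs this over SSH inside the VM:
    echo "clock drift ${delta}s, would run: sudo date -s @${remote_epoch}"
fi
```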
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
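The `find ... -exec mv {} {}.mk_disabled` step above neutralizes competing bridge/podman CNI configs by renaming them with a suffix the runtime ignores, rather than deleting them. A standalone sketch of that rename pattern against a scratch directory (filenames mirror the log; the directory is a placeholder for `/etc/cni/net.d`):

```shell
#!/bin/sh
# Sketch of the CNI-config disabling step: rename matching configs
# with .mk_disabled so the runtime skips them but they remain recoverable.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/99-loopback.conf"
find "$d" -maxdepth 1 -type f \
    \( -name '*bridge*' -or -name '*podman*' \) \
    -and -not -name '*.mk_disabled' \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"   # bridge config renamed; loopback config untouched
rm -rf "$d"
```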
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
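The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place: forcing `SystemdCgroup = false` (the "cgroupfs" driver noted in the log), swapping legacy runtime names for `io.containerd.runc.v2`, and pinning `conf_dir`. A sketch of the key substitution applied to a scratch copy of the file (GNU sed, matching the invocation in the log):

```shell
#!/bin/sh
# Sketch of the cgroup-driver rewrite from the log, on a scratch file
# standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs to select the cgroupfs driver,
# preserving the line's leading indentation via the capture group:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
rm -f "$cfg"
```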
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
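Having settled on Docker via cri-dockerd, the step above repoints `crictl` by overwriting `/etc/crictl.yaml` with the new `runtime-endpoint` (replacing the containerd socket written earlier at 18:50:28). A sketch of that one-line config write against a scratch directory standing in for `/etc`:

```shell
#!/bin/sh
# Sketch of the crictl.yaml write from the log; the directory is a
# placeholder for /etc, and tee echoes the content as the log shows.
dir=$(mktemp -d)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
    | tee "$dir/crictl.yaml"
rm -rf "$dir"
```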
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
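	The failure captured above is dockerd on multinode-343600-m02 timing out while dialing containerd's socket ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"). A minimal diagnostic sketch, assuming SSH access to the node and a systemd host (the commented unit-inspection commands are hypothetical next steps, not part of the test run):

```shell
# Sketch: confirm whether containerd's socket exists before dockerd retries.
# The dial target comes from the error line in the journal above.
sock=/run/containerd/containerd.sock

if [ -S "$sock" ]; then
  echo "containerd socket present"
else
  echo "containerd socket missing: docker will fail to dial $sock"
fi

# Hypothetical follow-ups on the node itself (not run here):
#   systemctl status containerd docker
#   journalctl -u containerd -u docker --since "01:50" --no-pager
```

	If the socket is absent, the likely cause is containerd never (re)starting after `systemd[1]: Stopping Docker Application Container Engine...`, so docker's 60s dial window expires.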
	
	
	==> Docker <==
	Jul 17 01:47:38 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:38.088367782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:38 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e33a722a67030954960c36f8f05d46aee11f2cdde88f41e6495c8a1744d26934/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:42 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:42Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240715-585640e9: Status: Downloaded newer image for kindest/kindnetd:v20240715-585640e9"
	Jul 17 01:47:42 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:42.660169622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:42 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:42.660418707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:42 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:42.660436506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:42 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:42.661377350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.440860184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441011976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441024876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	832a042d8e687       cbb01a7bd410d                                                                              3 minutes ago       Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                              3 minutes ago       Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493   4 minutes ago       Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                              4 minutes ago       Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                              4 minutes ago       Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                              4 minutes ago       Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                              4 minutes ago       Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                              4 minutes ago       Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:51:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:47:56 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:47:56 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:47:56 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:47:56 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m15s
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m31s
	  kube-system                 kindnet-wlznl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m16s
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m14s  kube-proxy       
	  Normal  Starting                 4m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m30s  kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s  kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s  kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m16s  node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                3m56s  kubelet          Node multinode-343600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.211180] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T01:47:16.138582Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:47:16.13865Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.170.61:2380"}
	{"level":"info","ts":"2024-07-17T01:47:16.141568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.170.61:2380"}
	{"level":"info","ts":"2024-07-17T01:47:16.145398Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:47:16.145321Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c0019e2fa7559460","initial-advertise-peer-urls":["https://172.27.170.61:2380"],"listen-peer-urls":["https://172.27.170.61:2380"],"advertise-client-urls":["https://172.27.170.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.170.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:47:16.439773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 received MsgPreVoteResp from c0019e2fa7559460 at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.439996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 received MsgVoteResp from c0019e2fa7559460 at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.440016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.440027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0019e2fa7559460 elected leader c0019e2fa7559460 at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.449774Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.459791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0019e2fa7559460","local-member-attributes":"{Name:multinode-343600 ClientURLs:[https://172.27.170.61:2379]}","request-path":"/0/members/c0019e2fa7559460/attributes","cluster-id":"71f3988bef0ae63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:47:16.460016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:47:16.462625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:47:16.469801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:47:16.470286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"71f3988bef0ae63d","local-member-id":"c0019e2fa7559460","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.470449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:47:16.477238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:47:16.470798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.477293Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.495782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.170.61:2379"}
	{"level":"info","ts":"2024-07-17T01:47:42.531787Z","caller":"traceutil/trace.go:171","msg":"trace[1471548533] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"106.860317ms","start":"2024-07-17T01:47:42.424899Z","end":"2024-07-17T01:47:42.53176Z","steps":["trace[1471548533] 'process raft request'  (duration: 106.667729ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:48:10.211715Z","caller":"traceutil/trace.go:171","msg":"trace[769534795] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"163.080459ms","start":"2024-07-17T01:48:10.048615Z","end":"2024-07-17T01:48:10.211696Z","steps":["trace[769534795] 'process raft request'  (duration: 162.973778ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:51:52 up 6 min,  0 users,  load average: 0.45, 0.43, 0.22
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 01:49:44.271385       1 main.go:303] handling current node
	I0717 01:49:54.274682       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:49:54.274833       1 main.go:303] handling current node
	I0717 01:50:04.278120       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:50:04.278249       1 main.go:303] handling current node
	I0717 01:50:14.271759       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:50:14.271902       1 main.go:303] handling current node
	I0717 01:50:24.272721       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:50:24.272881       1 main.go:303] handling current node
	I0717 01:50:34.279457       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:50:34.279667       1 main.go:303] handling current node
	I0717 01:50:44.272359       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:50:44.272428       1 main.go:303] handling current node
	I0717 01:50:54.272987       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:50:54.273043       1 main.go:303] handling current node
	I0717 01:51:04.271747       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:51:04.271818       1 main.go:303] handling current node
	I0717 01:51:14.274851       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:51:14.274924       1 main.go:303] handling current node
	I0717 01:51:24.274457       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:51:24.274596       1 main.go:303] handling current node
	I0717 01:51:34.272994       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:51:34.273201       1 main.go:303] handling current node
	I0717 01:51:44.271287       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 01:51:44.271380       1 main.go:303] handling current node
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.549386       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:47:18.563813       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:47:18.563958       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:47:18.564067       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:47:18.564074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:36.046092       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 01:47:36.052919       1 shared_informer.go:320] Caches are synced for taint
	I0717 01:47:36.054046       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 01:47:36.054255       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-343600"
	I0717 01:47:36.054441       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0717 01:47:36.059274       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:47:36.078491       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:47:36.090896       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:47:36.462784       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:47:36.463023       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:47:36.482532       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:47:37.218430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="597.659389ms"
	I0717 01:47:37.302589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.105747ms"
	I0717 01:47:37.357945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.260418ms"
	I0717 01:47:37.358351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="245.084µs"
	I0717 01:47:37.775077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.40057ms"
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:47:56 multinode-343600 kubelet[2292]: I0717 01:47:56.958558    2292 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a250328b-d9b2-4190-bf67-f997fd8bf662-config-volume\") pod \"coredns-7db6d8ff4d-mmfw4\" (UID: \"a250328b-d9b2-4190-bf67-f997fd8bf662\") " pod="kube-system/coredns-7db6d8ff4d-mmfw4"
	Jul 17 01:47:57 multinode-343600 kubelet[2292]: I0717 01:47:57.609912    2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd"
	Jul 17 01:47:57 multinode-343600 kubelet[2292]: I0717 01:47:57.680885    2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4"
	Jul 17 01:47:58 multinode-343600 kubelet[2292]: I0717 01:47:58.717263    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.717245915 podStartE2EDuration="14.717245915s" podCreationTimestamp="2024-07-17 01:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 01:47:58.716953683 +0000 UTC m=+36.823257815" watchObservedRunningTime="2024-07-17 01:47:58.717245915 +0000 UTC m=+36.823549947"
	Jul 17 01:47:58 multinode-343600 kubelet[2292]: I0717 01:47:58.739650    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mmfw4" podStartSLOduration=21.738893143 podStartE2EDuration="21.738893143s" podCreationTimestamp="2024-07-17 01:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 01:47:58.738313679 +0000 UTC m=+36.844617711" watchObservedRunningTime="2024-07-17 01:47:58.738893143 +0000 UTC m=+36.845197175"
	Jul 17 01:48:22 multinode-343600 kubelet[2292]: E0717 01:48:22.221493    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:48:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:48:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:48:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:48:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:49:22 multinode-343600 kubelet[2292]: E0717 01:49:22.210621    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:49:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:49:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:49:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:49:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:50:22 multinode-343600 kubelet[2292]: E0717 01:50:22.203888    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:50:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:50:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:50:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:50:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:51:22 multinode-343600 kubelet[2292]: E0717 01:51:22.202333    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:51:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:51:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:51:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:51:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a5100a7b9d17] <==
	I0717 01:47:57.907400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:47:57.925026       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:47:57.925084       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:47:57.939262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:47:57.939413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c!
	I0717 01:47:57.942709       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36c98cc7-49ba-416f-9ed9-321db1dd67ba", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c became leader
	I0717 01:47:58.040874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:51:44.623927    9932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (11.9608904s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (470.31s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (724.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- rollout status deployment/busybox
E0716 18:52:29.007075    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 18:54:00.808296    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 18:56:05.807147    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 18:59:00.811639    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:01:05.802456    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- rollout status deployment/busybox: exit status 1 (10m3.2611828s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 18:52:06.930114    2200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:10.187595   11416 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:11.891099   14532 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:14.045087    7064 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:16.690502   10384 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:21.728865    7276 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:26.186526   14056 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:32.174376   14064 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:39.671259    9772 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:02:59.869518    8080 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:03:31.980936    9476 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0716 19:03:31.980936    9476 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- nslookup kubernetes.io: (1.7197369s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- nslookup kubernetes.io: exit status 1 (348.9131ms)

                                                
                                                
** stderr ** 
	W0716 19:03:34.366814    7952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-xwt6c does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-xwt6c could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- nslookup kubernetes.default: exit status 1 (326.6757ms)

** stderr ** 
	W0716 19:03:35.299919    9456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-xwt6c does not have a host assigned

** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-xwt6c could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (341.2524ms)

** stderr ** 
	W0716 19:03:36.047642   10548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-xwt6c does not have a host assigned

** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-xwt6c could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
E0716 19:03:44.059553    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.2700427s)
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.3227326s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-477500                           | mount-start-2-477500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:41 PDT | 16 Jul 24 18:43 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-477500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:43 PDT |                     |
	|         | --profile mount-start-2-477500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-477500 ssh -- ls                    | mount-start-2-477500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:43 PDT | 16 Jul 24 18:43 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-477500                           | mount-start-2-477500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:43 PDT | 16 Jul 24 18:44 PDT |
	| delete  | -p mount-start-1-477500                           | mount-start-1-477500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT | 16 Jul 24 18:44 PDT |
	| start   | -p multinode-343600                               | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- apply -f                   | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT | 16 Jul 24 18:52 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- rollout                    | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
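Every Hyper-V operation above is a fresh `powershell.exe` invocation with the cmdlet and its arguments appended. A minimal Python sketch of how such an argv can be composed (the `hyperv_cmd` helper is illustrative, not minikube's actual API):

```python
# Illustrative only: mirrors the argv shape visible in the log lines
# above; this helper does not exist in minikube itself.
POWERSHELL = r"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

def hyperv_cmd(cmdlet, *args):
    """Build the argument vector for one Hyper-V cmdlet call:
    powershell.exe -NoProfile -NonInteractive Hyper-V\\<Cmdlet> <args...>."""
    return [POWERSHELL, "-NoProfile", "-NonInteractive",
            "Hyper-V\\" + cmdlet, *args]

# A subset of the New-VM call from the log, reassembled:
create = hyperv_cmd("New-VM", "multinode-343600",
                    "-SwitchName", "'Default Switch'",
                    "-MemoryStartupBytes", "2200MB")
```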
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
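The repeated `Get-VM`/`ipaddresses[0]` queries above form a wait loop: the driver re-asks for the adapter's first IP address about once per second until a non-empty value comes back (172.27.170.61 after roughly 28 seconds here). A sketch of that loop, with the PowerShell query abstracted behind a callable (all names hypothetical):

```python
import time

def wait_for_ip(query_ip, timeout=120.0, interval=1.0,
                clock=time.monotonic, sleep=time.sleep):
    """Poll query_ip() -- standing in for the
    '(( Get-VM ... ).networkadapters[0]).ipaddresses[0]' call --
    until it returns a non-empty string or the timeout elapses."""
    deadline = clock() + timeout
    while clock() < deadline:
        ip = query_ip()
        if ip:
            return ip
        sleep(interval)
    raise TimeoutError("VM never reported an IP address")
```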
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
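The shell snippet the provisioner just ran is an idempotent /etc/hosts edit: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic as a pure function over the file's contents (an illustrative rendering, not minikube code):

```python
import re

def ensure_hostname(hosts, name):
    """Apply the grep/sed/tee logic from the log to a hosts-file string."""
    if re.search(r"\s" + re.escape(name) + r"$", hosts, re.MULTILINE):
        return hosts                       # hostname already mapped
    if re.search(r"^127\.0\.1\.1\s", hosts, re.MULTILINE):
        # rewrite the existing 127.0.1.1 entry in place
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + name,
                      hosts, flags=re.MULTILINE)
    return hosts + "\n127.0.1.1 " + name   # append a fresh entry
```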
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
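`df --output=fstype /` prints a `Type` header line followed by the root filesystem's type, so piping through `tail -n 1` (as the SSH command above does) recovers just the value; tmpfs on this Buildroot guest. The parsing step, sketched in Python:

```python
def root_fstype(df_output):
    """Return the filesystem type from 'df --output=fstype /' output:
    a 'Type' header line followed by the value itself."""
    lines = [l for l in df_output.strip().splitlines() if l.strip()]
    return lines[-1].strip()
```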
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
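The command above is a common install-if-changed idiom: `diff` succeeds (and the `||` branch is skipped) when the new unit matches the one on disk; otherwise the new file is moved into place and docker is reloaded, enabled, and restarted. Here `diff` failed because no unit existed yet, so the full branch ran and the symlink was created. The decision logic, sketched as a function returning the actions to take (illustrative, not minikube code):

```python
def update_unit(existing, new):
    """Mirror 'diff a b || { mv ...; daemon-reload; enable; restart; }':
    act only when the on-disk unit is missing or differs."""
    if existing == new:
        return []
    return [
        "mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service",
        "systemctl daemon-reload",
        "systemctl enable docker",
        "systemctl restart docker",
    ]
```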
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
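The kubeadm config printed above is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick, hedged sanity check on such a stream before handing it to `kubeadm init` is to confirm all four kinds are present; `kubeadm-demo.yaml` below is an illustrative local copy, not the `/var/tmp/minikube/kubeadm.yaml` the log writes:

```shell
# Write a skeleton of the four-document stream and list its kinds.
cat > kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Expect exactly four kind: lines, one per document.
grep '^kind:' kubeadm-demo.yaml
```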
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
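The `/etc/hosts` rewrite above uses an idempotent grep-and-append idiom: strip any existing `control-plane.minikube.internal` line, then append the current IP mapping, so repeated starts never accumulate stale entries. A minimal sketch of the same pattern against a temp file (the `172.27.0.99` stale entry is invented for the demo):

```shell
# Demonstrate the idempotent hosts update on a throwaway file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n172.27.0.99\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any old control-plane mapping, then append the fresh one.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'172.27.170.61\tcontrol-plane.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```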
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
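The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` sequence above is OpenSSL's standard hashed-directory layout: TLS clients locate a CA by a symlink named after the subject-name hash of the certificate. A sketch of the same mechanism with a throwaway CA (assumes the `openssl` CLI is available; `demoCA` is an illustrative name):

```shell
# Create a self-signed CA, then link it under its subject hash,
# mirroring the <hash>.0 symlinks created in the log above.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
ls -l "$tmp/$hash.0"
```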
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
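The rescale recorded above is driven by a PUT to the deployment's scale subresource (`/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale`). As a hedged aid to reading the log, here is a minimal offline reconstruction of that `autoscaling/v1` Scale request body, with field values copied from the `request.go:1212` line; the actual client-go transport that sends it is omitted, and only `spec.replicas` is meaningful on write (`status` is server-populated).

```python
import json

# Reconstruction of the Scale body minikube PUTs to the coredns scale
# subresource, per the log above. This is a sketch for illustration only;
# metadata fields like uid/resourceVersion from the log are left out.
scale_body = {
    "kind": "Scale",
    "apiVersion": "autoscaling/v1",
    "metadata": {"name": "coredns", "namespace": "kube-system"},
    "spec": {"replicas": 1},
}

payload = json.dumps(scale_body)
print(payload)
```

The server's 200 response in the log echoes the same Scale object with `status.replicas` still at 2 until the controller converges, which is why the follow-up GET a half second later shows `"replicas":1` in both spec and status.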
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
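The `node_ready.go:53` lines come from polling the Node object and inspecting its `status.conditions` for the condition of type `Ready`. The logged responses truncate the status section, so as a hedged illustration the snippet below runs that check against a hand-made, trimmed-down Node document (the condition values are illustrative stand-ins, not taken from this run).

```python
import json

# Trimmed, illustrative Node document; the real object in the log above is
# truncated before status.conditions, so these values are assumptions.
node_doc = json.dumps({
    "kind": "Node",
    "metadata": {"name": "multinode-343600"},
    "status": {
        "conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "False"},
        ]
    },
})

def node_is_ready(raw: str) -> bool:
    """Return True iff the Node's "Ready" condition has status "True"."""
    node = json.loads(raw)
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

print(node_is_ready(node_doc))
```

With the `Ready` condition still `"False"`, the check fails and the loop keeps re-issuing the GET roughly every 500 ms, which matches the cadence of the requests above.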
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
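The shell snippet above is an idempotent /etc/hosts edit: if no line already maps the node name, it rewrites an existing `127.0.1.1` line in place, otherwise it appends a new mapping. The same decision logic in Go — a local stand-in for the remote shell, not code minikube runs, and using a suffix match as an approximation of the script's `grep -xq '.*\s<name>'` whole-line test:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic run over SSH above:
// leave the content alone if the name is already mapped, rewrite an
// existing "127.0.1.1 ..." line if present, otherwise append one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) {
			return hosts // already mapped: second run is a no-op
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1 ") {
			lines[i] = "127.0.1.1 " + name // rewrite the loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // no alias line: append one
}

func main() {
	out := ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube", "multinode-343600-m02")
	fmt.Println(out)
}
```

Running the function twice yields the same content, which is why the provisioner can replay it safely on every start.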
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
	
	
	==> Docker <==
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336423189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336625889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336741790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336832990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e933ef2daad4364897479f1d4f6dd2faf79a854c01e8e9af2ac4b320898cb5f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 01:52:09 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353261558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353669157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353691157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.354089456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb7b6f4d3bd7f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   e933ef2daad43       busybox-fc5497c4f-9zzvz
	832a042d8e687       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              16 minutes ago      Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                                         16 minutes ago      Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                                         16 minutes ago      Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                                         16 minutes ago      Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                                         16 minutes ago      Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	[INFO] 10.244.0.3:60325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249894s
	[INFO] 10.244.0.3:49103 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185058091s
	[INFO] 10.244.0.3:40233 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040129057s
	[INFO] 10.244.0.3:53435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.056299346s
	[INFO] 10.244.0.3:52034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177795s
	[INFO] 10.244.0.3:55399 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037734119s
	[INFO] 10.244.0.3:55087 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260193s
	[INFO] 10.244.0.3:47273 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232394s
	[INFO] 10.244.0.3:48029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.115999484s
	[INFO] 10.244.0.3:49805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126996s
	[INFO] 10.244.0.3:42118 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112698s
	[INFO] 10.244.0.3:50779 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153196s
	[INFO] 10.244.0.3:49493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098397s
	[INFO] 10.244.0.3:36336 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160395s
	[INFO] 10.244.0.3:37610 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068999s
	[INFO] 10.244.0.3:51523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052899s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:03:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9zzvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-wlznl                               100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                16m   kubelet          Node multinode-343600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:52] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T01:47:16.439893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 received MsgPreVoteResp from c0019e2fa7559460 at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.439996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 received MsgVoteResp from c0019e2fa7559460 at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.440016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.440027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0019e2fa7559460 elected leader c0019e2fa7559460 at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.449774Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.459791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0019e2fa7559460","local-member-attributes":"{Name:multinode-343600 ClientURLs:[https://172.27.170.61:2379]}","request-path":"/0/members/c0019e2fa7559460/attributes","cluster-id":"71f3988bef0ae63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:47:16.460016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:47:16.462625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:47:16.469801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:47:16.470286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"71f3988bef0ae63d","local-member-id":"c0019e2fa7559460","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.470449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:47:16.477238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:47:16.470798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.477293Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.495782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.170.61:2379"}
	{"level":"info","ts":"2024-07-17T01:47:42.531787Z","caller":"traceutil/trace.go:171","msg":"trace[1471548533] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"106.860317ms","start":"2024-07-17T01:47:42.424899Z","end":"2024-07-17T01:47:42.53176Z","steps":["trace[1471548533] 'process raft request'  (duration: 106.667729ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:48:10.211715Z","caller":"traceutil/trace.go:171","msg":"trace[769534795] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"163.080459ms","start":"2024-07-17T01:48:10.048615Z","end":"2024-07-17T01:48:10.211696Z","steps":["trace[769534795] 'process raft request'  (duration: 162.973778ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:57:16.612011Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":656}
	{"level":"info","ts":"2024-07-17T01:57:16.630662Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":656,"took":"17.926243ms","hash":3956697326,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2084864,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-17T01:57:16.630769Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3956697326,"revision":656,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T02:02:16.631242Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":895}
	{"level":"info","ts":"2024-07-17T02:02:16.642963Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":895,"took":"10.947623ms","hash":447313257,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1486848,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-17T02:02:16.643085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":447313257,"revision":895,"compact-revision":656}
	
	
	==> kernel <==
	 02:03:56 up 18 min,  0 users,  load average: 0.40, 0.48, 0.33
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 02:01:54.274625       1 main.go:303] handling current node
	I0717 02:02:04.277779       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:04.277960       1 main.go:303] handling current node
	I0717 02:02:14.275569       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:14.275737       1 main.go:303] handling current node
	I0717 02:02:24.277092       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:24.277179       1 main.go:303] handling current node
	I0717 02:02:34.276007       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:34.276108       1 main.go:303] handling current node
	I0717 02:02:44.272103       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:44.272157       1 main.go:303] handling current node
	I0717 02:02:54.281136       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:54.281238       1 main.go:303] handling current node
	I0717 02:03:04.277796       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:04.277907       1 main.go:303] handling current node
	I0717 02:03:14.280871       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:14.281079       1 main.go:303] handling current node
	I0717 02:03:24.280830       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:24.280866       1 main.go:303] handling current node
	I0717 02:03:34.272949       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:34.273082       1 main.go:303] handling current node
	I0717 02:03:44.271999       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:44.272234       1 main.go:303] handling current node
	I0717 02:03:54.277629       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:54.277668       1 main.go:303] handling current node
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.564067       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:47:18.564074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 02:03:34.189300       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49832: use of closed network connection
	E0717 02:03:35.136967       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49837: use of closed network connection
	E0717 02:03:35.880019       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49842: use of closed network connection
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:36.078491       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:47:36.090896       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:47:36.462784       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:47:36.463023       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:47:36.482532       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:47:37.218430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="597.659389ms"
	I0717 01:47:37.302589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.105747ms"
	I0717 01:47:37.357945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.260418ms"
	I0717 01:47:37.358351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="245.084µs"
	I0717 01:47:37.775077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.40057ms"
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 01:52:06.859031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.949838ms"
	I0717 01:52:06.876210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.855684ms"
	I0717 01:52:06.899379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.106015ms"
	I0717 01:52:06.899571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0717 01:52:09.997094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.053979ms"
	I0717 01:52:09.999036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:59:22 multinode-343600 kubelet[2292]: E0717 01:59:22.201890    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:59:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:59:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:59:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:59:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:00:22 multinode-343600 kubelet[2292]: E0717 02:00:22.202093    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:01:22 multinode-343600 kubelet[2292]: E0717 02:01:22.203029    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:02:22 multinode-343600 kubelet[2292]: E0717 02:02:22.203137    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:03:22 multinode-343600 kubelet[2292]: E0717 02:03:22.203908    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a5100a7b9d17] <==
	I0717 01:47:57.907400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:47:57.925026       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:47:57.925084       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:47:57.939262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:47:57.939413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c!
	I0717 01:47:57.942709       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36c98cc7-49ba-416f-9ed9-321db1dd67ba", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c became leader
	I0717 01:47:58.040874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:03:48.663081    2476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
E0716 19:04:00.813139    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (12.0724425s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-xwt6c
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-343600 describe pod busybox-fc5497c4f-xwt6c
helpers_test.go:282: (dbg) kubectl --context multinode-343600 describe pod busybox-fc5497c4f-xwt6c:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-xwt6c
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mnw9c (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mnw9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  108s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (724.37s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (45.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- sh -c "ping -c 1 172.27.160.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-9zzvz -- sh -c "ping -c 1 172.27.160.1": exit status 1 (10.440592s)

                                                
                                                
-- stdout --
	PING 172.27.160.1 (172.27.160.1): 56 data bytes
	
	--- 172.27.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:04:11.652753    4064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.27.160.1) from pod (busybox-fc5497c4f-9zzvz): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-343600 -- exec busybox-fc5497c4f-xwt6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (374.2597ms)

                                                
                                                
** stderr ** 
	W0716 19:04:22.079053    3468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-xwt6c does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-xwt6c could not resolve 'host.minikube.internal': exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.2019258s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.3763267s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-477500                           | mount-start-1-477500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT | 16 Jul 24 18:44 PDT |
	| start   | -p multinode-343600                               | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- apply -f                   | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT | 16 Jul 24 18:52 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- rollout                    | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | busybox-fc5497c4f-9zzvz                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-9zzvz -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600     | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
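	An aside on reading the `Get-VMSwitch` JSON above: `SwitchType` is serialized as an integer from the Hyper-V `VMSwitchType` enum (0 = Private, 1 = Internal, 2 = External), so "Default Switch" here is an internal switch; since the `Where-Object` filter found no external switch, minikube falls back to it. A trivial decoder for reading such dumps (the helper name is made up for illustration):

```shell
# Map Hyper-V VMSwitchType enum values, as emitted by ConvertTo-Json,
# to their names (per the Microsoft.HyperV.PowerShell.VMSwitchType enum).
decode_switch_type() {
  case "$1" in
    0) echo Private ;;
    1) echo Internal ;;
    2) echo External ;;
    *) echo Unknown ;;
  esac
}

decode_switch_type 1   # prints: Internal
```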
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
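	The `/etc/hosts` rewrite minikube just ran over SSH (replace the `127.0.1.1` mapping if present, otherwise append one) can be exercised standalone. A sketch against a scratch copy, so nothing system-wide is touched; the placeholder hostname is just for illustration:

```shell
# Standalone rendition of the /etc/hosts update logic shown above,
# run against a temp file instead of the real /etc/hosts.
NAME=multinode-343600
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-hostname\n' > "$HOSTS"
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # an existing 127.0.1.1 line: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # prints: 127.0.1.1 multinode-343600
rm -f "$HOSTS"
```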
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
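	The probe behind that log line is a one-liner over SSH; on the boot2docker guest the root filesystem is a tmpfs, which is what gets recorded here. The same probe runs on any Linux host with GNU coreutils (the value will differ there, e.g. ext4 or overlay, so none is assumed):

```shell
# Report the filesystem type of / -- the same check minikube issues
# over SSH. On the boot2docker guest this prints "tmpfs".
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: $FSTYPE"
```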
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
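The SSH command above follows a compare-then-swap pattern: the candidate unit is written to `docker.service.new`, and only when it differs from the installed unit is it moved into place, followed by a daemon-reload and restart. A minimal sketch of the same pattern, using illustrative `/tmp` paths rather than the actual systemd paths:

```shell
# Compare-then-swap config install: write a candidate file, replace the live one
# only if the contents differ. Paths and contents are illustrative, not minikube's.
new=/tmp/demo.conf.new
cur=/tmp/demo.conf
printf 'key=value\n' > "$new"
if ! diff -u "$cur" "$new" >/dev/null 2>&1; then
    mv "$new" "$cur"        # swap in the new config
    echo "config updated"   # here minikube runs daemon-reload && restart docker
fi
```

The `|| { ... }` in the logged command exploits `diff`'s nonzero exit status on any difference (or on a missing file, as happens here on first provision), so an unchanged config costs only the comparison.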
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
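The clock fix above reads the guest's epoch time over SSH, computes the host/guest delta, then resets the guest with `sudo date -s @<epoch>`. A sketch of the delta computation, using the approximate epoch values visible in this log (the host value is inferred from the logged 4.79s delta):

```shell
# Guest-vs-host clock delta from epoch seconds (values approximated from this log).
guest=1721180791        # guest clock, seconds since epoch
host=1721180787         # host clock at roughly the same moment
delta=$((guest - host))
echo "delta=${delta}s"  # minikube then resets the guest: sudo date -s @<host_epoch>
```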
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
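The sequence above rewrites `/etc/containerd/config.toml` in place with a series of `sed` edits before restarting containerd. The `SystemdCgroup` toggle can be sketched against a sample excerpt (the file content below is a made-up fragment, not the real containerd config):

```shell
# Toggle SystemdCgroup in a sample containerd config excerpt (made-up content),
# using the same sed expression as the logged command. Requires GNU sed (-i -r).
cfg=/tmp/config.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The capture group `( *)` preserves the original indentation, so the edit is safe regardless of how deeply the key is nested in the TOML file.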
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
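The `/etc/hosts` update above is idempotent: it filters out any existing `host.minikube.internal` line, appends the fresh entry, and copies the result back, so repeated runs never duplicate the entry. A sketch against a stand-in file (`/tmp/hosts.demo` and the stale `10.0.0.5` entry are illustrative; the new IP is the one seen in this log):

```shell
# Idempotently pin host.minikube.internal: strip any old entry, append the new one.
# /tmp/hosts.demo stands in for /etc/hosts; 10.0.0.5 is a made-up stale entry.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal' "$hosts"
  printf '172.27.160.1\thost.minikube.internal\n'; } > "$hosts.tmp"
mv "$hosts.tmp" "$hosts"
cat "$hosts"
```

The logged command first probes with a plain `grep` so the rewrite (which needs `sudo cp`) only runs when the entry is missing or stale.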
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
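The preload sequence above is: copy the `.tar.lz4` over scp, unpack it into `/var` with tar's external-compressor flag, then delete the tarball. The sketch below reproduces the mechanics against scratch directories; it assumes GNU tar (whose `-I` is shorthand for `--use-compress-program`) and substitutes gzip for lz4 so it runs without the lz4 CLI installed:

```shell
#!/bin/bash
# Mimic the preload round-trip: pack a small tree with an external
# compressor, extract it elsewhere, then remove the tarball afterwards
# (as minikube removes /preloaded.tar.lz4 once extraction completes).
src=$(mktemp -d); dst=$(mktemp -d); tb=$(mktemp)
mkdir -p "$src/lib/docker"
echo "preloaded" > "$src/lib/docker/marker"

tar -C "$src" -I gzip -cf "$tb" .
tar -C "$dst" -I gzip -xf "$tb"
rm "$tb"

cat "$dst/lib/docker/marker"
```

The log's real invocation additionally passes `--xattrs --xattrs-include security.capability` so file capabilities on the preloaded binaries survive extraction.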
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
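The repeated `openssl x509 -hash` / `ln -fs` pairs above implement the standard c_rehash convention: each CA certificate must be reachable in the trust directory under the name `<subject-hash>.0` for OpenSSL's hashed-directory lookup to find it. A sketch with a throwaway self-signed cert in a scratch directory (the `demoCA` subject and all paths here are invented, standing in for minikubeCA.pem and /etc/ssl/certs):

```shell
#!/bin/bash
certs=$(mktemp -d)
# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certs/demo.key" -out "$certs/demo.pem" -days 1 2>/dev/null

# c_rehash convention: symlink the cert as <subject-hash>.0 so it is
# discoverable by subject-hash lookup, exactly as the log links
# minikubeCA.pem to /etc/ssl/certs/b5213941.0.
hash=$(openssl x509 -hash -noout -in "$certs/demo.pem")
ln -fs "$certs/demo.pem" "$certs/$hash.0"

openssl x509 -noout -subject -in "$certs/$hash.0"
```

The `.0` suffix is a collision counter: if two distinct certificates hashed to the same value, the second would be linked as `<hash>.1`.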
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
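	The rounds above (query `( Hyper-V\Get-VM … ).state`, then the adapter's `ipaddresses[0]`, wait, repeat until an address appears) follow a plain bounded-retry pattern. A minimal bash sketch of that pattern, with a stub `get_ip` standing in for the PowerShell query (the stub, its counter file, and the retry bound are assumptions for illustration, not minikube code):

	```shell
	#!/usr/bin/env bash
	# Counter file lets the stub keep state across $(...) subshell calls.
	count_file="$(mktemp)"
	echo 0 > "$count_file"

	# Stub for the Hyper-V IP query: empty for the first few calls,
	# then an address -- mimicking a VM whose network is still coming up.
	get_ip() {
	  local n
	  n=$(( $(cat "$count_file") + 1 ))
	  echo "$n" > "$count_file"
	  if [ "$n" -ge 4 ]; then
	    echo "172.27.171.221"
	  fi
	}

	# Poll until a non-empty address comes back, with a bounded retry count.
	ip=""
	for _ in $(seq 1 10); do
	  ip="$(get_ip)"
	  [ -n "$ip" ] && break
	  sleep 0.1   # the real loop waits about a second between rounds
	done
	echo "got IP: $ip"
	```

	The real loop gives up after a timeout rather than retrying forever; the bound on attempts here plays that role.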
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
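	The `/etc/hosts` snippet that just ran pins `127.0.1.1` to the new node name so the hostname resolves locally. The same grep/sed shape can be exercised against a scratch file (the temp file and its seed contents are assumptions for illustration; the real command edits `/etc/hosts` on the guest over SSH):

	```shell
	#!/usr/bin/env bash
	set -eu
	hosts="$(mktemp)"               # scratch stand-in for /etc/hosts
	name="multinode-343600-m02"     # hostname being provisioned
	printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

	# Same shape as the provisioning command: replace an existing
	# 127.0.1.1 entry, or append one if none exists.
	if ! grep -q "[[:space:]]${name}\$" "$hosts"; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
	    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
	  else
	    echo "127.0.1.1 ${name}" >> "$hosts"
	  fi
	fi

	grep '^127\.0\.1\.1' "$hosts"
	```

	Either branch leaves exactly one `127.0.1.1` line naming the node, which is why the provisioning command is safe to re-run.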
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
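	The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns what backs the guest's root filesystem before writing the docker unit. Run locally it reports the host's own root filesystem type, which will usually not be `tmpfs` (that value is specific to the buildroot guest image):

	```shell
	#!/usr/bin/env bash
	# Same probe the provisioner runs over SSH: print the filesystem
	# type backing /. Requires GNU coreutils df for --output.
	fstype="$(df --output=fstype / | tail -n 1)"
	echo "root filesystem type: $fstype"
	```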
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
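	The write-then-swap command above is an idempotent update pattern: the candidate unit goes to `.new`, and the move/reload only fires when it differs from (or, as here, when there is no) installed copy. A runnable sketch under assumed paths (`/tmp/demo.service` stands in for `/lib/systemd/system/docker.service`; the reload/enable/restart steps are omitted):

	```shell
	# Write the candidate unit, then replace the installed one only on diff.
	UNIT=/tmp/demo.service
	printf '%s\n' '[Unit]' 'Description=demo' > "${UNIT}.new"
	if ! diff -u "$UNIT" "${UNIT}.new" >/dev/null 2>&1; then
	    # Differs or absent: install the new copy (real flow then daemon-reloads).
	    mv "${UNIT}.new" "$UNIT"
	    echo "unit updated"
	else
	    # Identical: discard the candidate, nothing to restart.
	    rm -f "${UNIT}.new"
	    echo "unit unchanged"
	fi
	```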
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
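	The clock-fix step above reads the guest's epoch time, computes the host/guest delta, then sets the guest clock with `date -s @EPOCH`. A local sketch of the arithmetic (in the real flow `guest_epoch` is read over SSH, so here the delta is near zero):

	```shell
	guest_epoch=$(date +%s.%N)   # read over SSH from the guest in the real flow
	host_epoch=$(date +%s.%N)
	delta=$(awk -v h="$host_epoch" -v g="$guest_epoch" 'BEGIN { printf "%.9f", h - g }')
	echo "clock delta: ${delta}s"
	# Applying the correction on the guest requires root:
	#   sudo date -s @"${host_epoch%.*}"
	```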
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
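	The sed runs above rewrite /etc/containerd/config.toml in place to select the cgroupfs driver. The key substitution, applied here to a sample fragment under an assumed path (`/tmp/config.toml` stands in for the real file):

	```shell
	CFG=/tmp/config.toml   # real path in the log: /etc/containerd/config.toml
	printf '%s\n' \
	  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
	  '  SystemdCgroup = true' > "$CFG"
	# Same substitution the provisioner runs, preserving indentation via \1:
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	grep SystemdCgroup "$CFG"
	```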
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
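	Note that /etc/crictl.yaml is written twice in this run: first pointing at the containerd socket, then rewritten above once cri-dockerd is chosen as the runtime. A sketch of that final state under an assumed path (`/tmp/crictl.yaml` stands in for /etc/crictl.yaml, which the log writes with `sudo tee`):

	```shell
	CRICTL=/tmp/crictl.yaml   # real path: /etc/crictl.yaml
	printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$CRICTL"
	cat "$CRICTL"
	```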
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
	
	
	==> Docker <==
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336423189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336625889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336741790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336832990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e933ef2daad4364897479f1d4f6dd2faf79a854c01e8e9af2ac4b320898cb5f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 01:52:09 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353261558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353669157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353691157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.354089456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb7b6f4d3bd7f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   e933ef2daad43       busybox-fc5497c4f-9zzvz
	832a042d8e687       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              17 minutes ago      Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                                         17 minutes ago      Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                                         17 minutes ago      Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                                         17 minutes ago      Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                                         17 minutes ago      Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	[INFO] 10.244.0.3:60325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249894s
	[INFO] 10.244.0.3:49103 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185058091s
	[INFO] 10.244.0.3:40233 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040129057s
	[INFO] 10.244.0.3:53435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.056299346s
	[INFO] 10.244.0.3:52034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177795s
	[INFO] 10.244.0.3:55399 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037734119s
	[INFO] 10.244.0.3:55087 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260193s
	[INFO] 10.244.0.3:47273 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232394s
	[INFO] 10.244.0.3:48029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.115999484s
	[INFO] 10.244.0.3:49805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126996s
	[INFO] 10.244.0.3:42118 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112698s
	[INFO] 10.244.0.3:50779 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153196s
	[INFO] 10.244.0.3:49493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098397s
	[INFO] 10.244.0.3:36336 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160395s
	[INFO] 10.244.0.3:37610 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068999s
	[INFO] 10.244.0.3:51523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052899s
	[INFO] 10.244.0.3:49356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333991s
	[INFO] 10.244.0.3:39090 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137797s
	[INFO] 10.244.0.3:50560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244893s
	[INFO] 10.244.0.3:44091 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164296s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:04:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:02:39 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9zzvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-wlznl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                16m   kubelet          Node multinode-343600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:52] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T01:47:16.439893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 received MsgPreVoteResp from c0019e2fa7559460 at term 1"}
	{"level":"info","ts":"2024-07-17T01:47:16.439987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.439996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 received MsgVoteResp from c0019e2fa7559460 at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.440016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0019e2fa7559460 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.440027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0019e2fa7559460 elected leader c0019e2fa7559460 at term 2"}
	{"level":"info","ts":"2024-07-17T01:47:16.449774Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.459791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0019e2fa7559460","local-member-attributes":"{Name:multinode-343600 ClientURLs:[https://172.27.170.61:2379]}","request-path":"/0/members/c0019e2fa7559460/attributes","cluster-id":"71f3988bef0ae63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:47:16.460016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:47:16.462625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:47:16.469801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:47:16.470286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"71f3988bef0ae63d","local-member-id":"c0019e2fa7559460","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.470449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:47:16.477238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:47:16.470798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.477293Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:47:16.495782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.170.61:2379"}
	{"level":"info","ts":"2024-07-17T01:47:42.531787Z","caller":"traceutil/trace.go:171","msg":"trace[1471548533] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"106.860317ms","start":"2024-07-17T01:47:42.424899Z","end":"2024-07-17T01:47:42.53176Z","steps":["trace[1471548533] 'process raft request'  (duration: 106.667729ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:48:10.211715Z","caller":"traceutil/trace.go:171","msg":"trace[769534795] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"163.080459ms","start":"2024-07-17T01:48:10.048615Z","end":"2024-07-17T01:48:10.211696Z","steps":["trace[769534795] 'process raft request'  (duration: 162.973778ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:57:16.612011Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":656}
	{"level":"info","ts":"2024-07-17T01:57:16.630662Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":656,"took":"17.926243ms","hash":3956697326,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2084864,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-17T01:57:16.630769Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3956697326,"revision":656,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T02:02:16.631242Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":895}
	{"level":"info","ts":"2024-07-17T02:02:16.642963Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":895,"took":"10.947623ms","hash":447313257,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1486848,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-17T02:02:16.643085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":447313257,"revision":895,"compact-revision":656}
	
	
	==> kernel <==
	 02:04:42 up 19 min,  0 users,  load average: 0.40, 0.48, 0.33
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 02:02:34.276108       1 main.go:303] handling current node
	I0717 02:02:44.272103       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:44.272157       1 main.go:303] handling current node
	I0717 02:02:54.281136       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:02:54.281238       1 main.go:303] handling current node
	I0717 02:03:04.277796       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:04.277907       1 main.go:303] handling current node
	I0717 02:03:14.280871       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:14.281079       1 main.go:303] handling current node
	I0717 02:03:24.280830       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:24.280866       1 main.go:303] handling current node
	I0717 02:03:34.272949       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:34.273082       1 main.go:303] handling current node
	I0717 02:03:44.271999       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:44.272234       1 main.go:303] handling current node
	I0717 02:03:54.277629       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:03:54.277668       1 main.go:303] handling current node
	I0717 02:04:04.272932       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:04:04.272988       1 main.go:303] handling current node
	I0717 02:04:14.274928       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:04:14.275035       1 main.go:303] handling current node
	I0717 02:04:24.279536       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:04:24.280034       1 main.go:303] handling current node
	I0717 02:04:34.278022       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:04:34.278153       1 main.go:303] handling current node
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 02:03:34.189300       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49832: use of closed network connection
	E0717 02:03:35.136967       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49837: use of closed network connection
	E0717 02:03:35.880019       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49842: use of closed network connection
	E0717 02:04:11.454010       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49860: use of closed network connection
	E0717 02:04:21.903848       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49862: use of closed network connection
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:36.078491       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:47:36.090896       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:47:36.462784       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:47:36.463023       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:47:36.482532       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:47:37.218430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="597.659389ms"
	I0717 01:47:37.302589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.105747ms"
	I0717 01:47:37.357945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.260418ms"
	I0717 01:47:37.358351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="245.084µs"
	I0717 01:47:37.775077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.40057ms"
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 01:52:06.859031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.949838ms"
	I0717 01:52:06.876210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.855684ms"
	I0717 01:52:06.899379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.106015ms"
	I0717 01:52:06.899571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0717 01:52:09.997094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.053979ms"
	I0717 01:52:09.999036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:00:22 multinode-343600 kubelet[2292]: E0717 02:00:22.202093    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:00:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:01:22 multinode-343600 kubelet[2292]: E0717 02:01:22.203029    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:01:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:02:22 multinode-343600 kubelet[2292]: E0717 02:02:22.203137    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:02:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:03:22 multinode-343600 kubelet[2292]: E0717 02:03:22.203908    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:03:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:04:22 multinode-343600 kubelet[2292]: E0717 02:04:22.212987    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a5100a7b9d17] <==
	I0717 01:47:57.907400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:47:57.925026       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:47:57.925084       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:47:57.939262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:47:57.939413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c!
	I0717 01:47:57.942709       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36c98cc7-49ba-416f-9ed9-321db1dd67ba", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c became leader
	I0717 01:47:58.040874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-343600_ea22fbf4-24a8-4e78-bff2-995a75ed759c!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:04:34.664246   10376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (11.90995s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-xwt6c
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-343600 describe pod busybox-fc5497c4f-xwt6c
helpers_test.go:282: (dbg) kubectl --context multinode-343600 describe pod busybox-fc5497c4f-xwt6c:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-xwt6c
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mnw9c (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mnw9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m34s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (45.59s)

                                                
                                    
TestMultiNode/serial/AddNode (273.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-343600 -v 3 --alsologtostderr
E0716 19:06:05.805214    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-343600 -v 3 --alsologtostderr: (3m21.903514s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr: exit status 2 (36.5119365s)

                                                
                                                
-- stdout --
	multinode-343600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-343600-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-343600-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:08:18.368527   13836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 19:08:18.377887   13836 out.go:291] Setting OutFile to fd 984 ...
	I0716 19:08:18.378222   13836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:08:18.378222   13836 out.go:304] Setting ErrFile to fd 976...
	I0716 19:08:18.378222   13836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:08:18.395751   13836 out.go:298] Setting JSON to false
	I0716 19:08:18.395751   13836 mustload.go:65] Loading cluster: multinode-343600
	I0716 19:08:18.396727   13836 notify.go:220] Checking for updates...
	I0716 19:08:18.397486   13836 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 19:08:18.397984   13836 status.go:255] checking status of multinode-343600 ...
	I0716 19:08:18.398242   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:08:20.645809   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:20.645921   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:20.645921   13836 status.go:330] multinode-343600 host status = "Running" (err=<nil>)
	I0716 19:08:20.645921   13836 host.go:66] Checking if "multinode-343600" exists ...
	I0716 19:08:20.646701   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:08:22.863827   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:22.864044   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:22.864138   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 19:08:25.518653   13836 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 19:08:25.518653   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:25.519456   13836 host.go:66] Checking if "multinode-343600" exists ...
	I0716 19:08:25.533431   13836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:08:25.533431   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:08:27.738894   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:27.738894   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:27.739075   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 19:08:30.326461   13836 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 19:08:30.326461   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:30.326871   13836 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 19:08:30.432110   13836 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8986626s)
	I0716 19:08:30.445208   13836 ssh_runner.go:195] Run: systemctl --version
	I0716 19:08:30.467448   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:08:30.494369   13836 kubeconfig.go:125] found "multinode-343600" server: "https://172.27.170.61:8443"
	I0716 19:08:30.494369   13836 api_server.go:166] Checking apiserver status ...
	I0716 19:08:30.506307   13836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 19:08:30.546423   13836 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup
	W0716 19:08:30.570122   13836 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0716 19:08:30.582072   13836 ssh_runner.go:195] Run: ls
	I0716 19:08:30.590626   13836 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 19:08:30.598663   13836 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 19:08:30.598663   13836 status.go:422] multinode-343600 apiserver status = Running (err=<nil>)
	I0716 19:08:30.598663   13836 status.go:257] multinode-343600 status: &{Name:multinode-343600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 19:08:30.599261   13836 status.go:255] checking status of multinode-343600-m02 ...
	I0716 19:08:30.599542   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:08:32.814014   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:32.814014   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:32.814332   13836 status.go:330] multinode-343600-m02 host status = "Running" (err=<nil>)
	I0716 19:08:32.814332   13836 host.go:66] Checking if "multinode-343600-m02" exists ...
	I0716 19:08:32.815349   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:08:35.009621   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:35.009621   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:35.009621   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 19:08:37.622492   13836 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 19:08:37.623619   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:37.623619   13836 host.go:66] Checking if "multinode-343600-m02" exists ...
	I0716 19:08:37.636725   13836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:08:37.636725   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:08:39.809512   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:39.809512   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:39.810583   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 19:08:42.433802   13836 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 19:08:42.434096   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:42.434235   13836 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 19:08:42.540247   13836 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9035053s)
	I0716 19:08:42.554609   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:08:42.584000   13836 status.go:257] multinode-343600-m02 status: &{Name:multinode-343600-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0716 19:08:42.584000   13836 status.go:255] checking status of multinode-343600-m03 ...
	I0716 19:08:42.585109   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:08:44.792015   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:44.792015   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:44.792907   13836 status.go:330] multinode-343600-m03 host status = "Running" (err=<nil>)
	I0716 19:08:44.792907   13836 host.go:66] Checking if "multinode-343600-m03" exists ...
	I0716 19:08:44.793698   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:08:47.043708   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:47.044434   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:47.045014   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:08:49.738780   13836 main.go:141] libmachine: [stdout =====>] : 172.27.173.202
	
	I0716 19:08:49.738780   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:49.738780   13836 host.go:66] Checking if "multinode-343600-m03" exists ...
	I0716 19:08:49.759570   13836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:08:49.759570   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:08:51.990248   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:08:51.990248   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:51.990936   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:08:54.596540   13836 main.go:141] libmachine: [stdout =====>] : 172.27.173.202
	
	I0716 19:08:54.596560   13836 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:08:54.596869   13836 sshutil.go:53] new ssh client: &{IP:172.27.173.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m03\id_rsa Username:docker}
	I0716 19:08:54.698233   13836 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9386453s)
	I0716 19:08:54.713655   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:08:54.739006   13836 status.go:257] multinode-343600-m03 status: &{Name:multinode-343600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:129: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
E0716 19:09:00.816392    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.2620738s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
E0716 19:09:09.012600    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.6485634s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-343600                               | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- apply -f                   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT | 16 Jul 24 18:52 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- rollout                    | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | busybox-fc5497c4f-9zzvz                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-9zzvz -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| node    | add -p multinode-343600 -v 3                      | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:08 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
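The repeated `GET /api/v1/nodes/multinode-343600` cycles above are minikube's node-readiness poll: fetch the Node object, check its `Ready` condition, sleep roughly half a second, and retry until the condition reports `"True"`. A minimal Python sketch of that loop (illustrative only — minikube's actual implementation is the Go code in `node_ready.go`; the helper names here are invented):

```python
import time

def node_ready(node: dict) -> bool:
    """Return True when the Node object's Ready condition has status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

def wait_for_ready(fetch_node, interval: float = 0.5, timeout: float = 60.0) -> bool:
    """Poll fetch_node() until node_ready() succeeds or the timeout expires,
    mirroring the ~500 ms request cadence visible in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if node_ready(fetch_node()):
            return True
        time.sleep(interval)
    return False
```

Here `fetch_node` stands in for the API-server GET seen in the log; in a real client it would deserialize the JSON response body shown above.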
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
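The GET/PUT pair above is minikube re-applying the `standard` StorageClass so that the `storageclass.kubernetes.io/is-default-class: "true"` annotation is preserved — that annotation (a standard Kubernetes convention) is what makes a StorageClass the cluster default. A small check of the kind implied here, written against the response body shown in the log (sketch, not minikube code):

```python
def is_default_storage_class(sc: dict) -> bool:
    """True if the StorageClass carries the default-class annotation,
    as the "standard" class in the log response body does."""
    annotations = sc.get("metadata", {}).get("annotations", {})
    return annotations.get("storageclass.kubernetes.io/is-default-class") == "true"
```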
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
	
	
	==> Docker <==
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336423189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336625889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336741790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336832990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e933ef2daad4364897479f1d4f6dd2faf79a854c01e8e9af2ac4b320898cb5f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 01:52:09 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353261558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353669157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353691157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.354089456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb7b6f4d3bd7f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   e933ef2daad43       busybox-fc5497c4f-9zzvz
	832a042d8e687       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              21 minutes ago      Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                                         21 minutes ago      Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                                         21 minutes ago      Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                                         21 minutes ago      Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                                         21 minutes ago      Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                                         21 minutes ago      Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	[INFO] 10.244.0.3:60325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249894s
	[INFO] 10.244.0.3:49103 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185058091s
	[INFO] 10.244.0.3:40233 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040129057s
	[INFO] 10.244.0.3:53435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.056299346s
	[INFO] 10.244.0.3:52034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177795s
	[INFO] 10.244.0.3:55399 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037734119s
	[INFO] 10.244.0.3:55087 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260193s
	[INFO] 10.244.0.3:47273 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232394s
	[INFO] 10.244.0.3:48029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.115999484s
	[INFO] 10.244.0.3:49805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126996s
	[INFO] 10.244.0.3:42118 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112698s
	[INFO] 10.244.0.3:50779 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153196s
	[INFO] 10.244.0.3:49493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098397s
	[INFO] 10.244.0.3:36336 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160395s
	[INFO] 10.244.0.3:37610 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068999s
	[INFO] 10.244.0.3:51523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052899s
	[INFO] 10.244.0.3:49356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333991s
	[INFO] 10.244.0.3:39090 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137797s
	[INFO] 10.244.0.3:50560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244893s
	[INFO] 10.244.0.3:44091 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164296s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:09:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9zzvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-wlznl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                21m   kubelet          Node multinode-343600 status is now: NodeReady
	
	
	Name:               multinode-343600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T19_07_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:07:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:09:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:07:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:07:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:07:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:08:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.173.202
	  Hostname:    multinode-343600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c97ec282efd48b88cab0b67f2c8f7c2
	  System UUID:                bad18aee-b3d1-0c44-b82f-1f20fb05d065
	  Boot ID:                    33c029cd-4782-43da-a050-56424fd1feae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xwt6c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-ghs2x              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      88s
	  kube-system                 kube-proxy-4bg7x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  NodeHasSufficientMemory  88s (x2 over 88s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x2 over 88s)  kubelet          Node multinode-343600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x2 over 88s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                node-controller  Node multinode-343600-m03 event: Registered Node multinode-343600-m03 in Controller
	  Normal  NodeReady                59s                kubelet          Node multinode-343600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:52] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T02:05:13.843808Z","caller":"traceutil/trace.go:171","msg":"trace[1739602045] linearizableReadLoop","detail":"{readStateIndex:1507; appliedIndex:1506; }","duration":"107.913433ms","start":"2024-07-17T02:05:13.735876Z","end":"2024-07-17T02:05:13.84379Z","steps":["trace[1739602045] 'read index received'  (duration: 107.540343ms)","trace[1739602045] 'applied index is now lower than readState.Index'  (duration: 372.39µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:05:13.844005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.068229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:05:13.844085Z","caller":"traceutil/trace.go:171","msg":"trace[1309265040] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1280; }","duration":"108.230624ms","start":"2024-07-17T02:05:13.735844Z","end":"2024-07-17T02:05:13.844075Z","steps":["trace[1309265040] 'agreement among raft nodes before linearized reading'  (duration: 108.040129ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:05:13.84481Z","caller":"traceutil/trace.go:171","msg":"trace[1249349102] transaction","detail":"{read_only:false; response_revision:1280; number_of_response:1; }","duration":"172.038629ms","start":"2024-07-17T02:05:13.672761Z","end":"2024-07-17T02:05:13.8448Z","steps":["trace[1249349102] 'process raft request'  (duration: 170.732764ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:05:18.090986Z","caller":"traceutil/trace.go:171","msg":"trace[486786045] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"108.572613ms","start":"2024-07-17T02:05:17.982392Z","end":"2024-07-17T02:05:18.090964Z","steps":["trace[486786045] 'process raft request'  (duration: 108.31692ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:16.649225Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1137}
	{"level":"info","ts":"2024-07-17T02:07:16.65943Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1137,"took":"9.63174ms","hash":61041692,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-17T02:07:16.659558Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":61041692,"revision":1137,"compact-revision":895}
	{"level":"info","ts":"2024-07-17T02:07:51.533931Z","caller":"traceutil/trace.go:171","msg":"trace[462829157] transaction","detail":"{read_only:false; response_revision:1438; number_of_response:1; }","duration":"230.454648ms","start":"2024-07-17T02:07:51.303457Z","end":"2024-07-17T02:07:51.533912Z","steps":["trace[462829157] 'process raft request'  (duration: 230.337651ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.534107Z","caller":"traceutil/trace.go:171","msg":"trace[2024600941] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1700; }","duration":"209.685912ms","start":"2024-07-17T02:07:51.324411Z","end":"2024-07-17T02:07:51.534097Z","steps":["trace[2024600941] 'read index received'  (duration: 209.681812ms)","trace[2024600941] 'applied index is now lower than readState.Index'  (duration: 3.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.534885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.788109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T02:07:51.53521Z","caller":"traceutil/trace.go:171","msg":"trace[1749208603] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1438; }","duration":"210.773183ms","start":"2024-07-17T02:07:51.324407Z","end":"2024-07-17T02:07:51.53518Z","steps":["trace[1749208603] 'agreement among raft nodes before linearized reading'  (duration: 209.719411ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.684235Z","caller":"traceutil/trace.go:171","msg":"trace[1696915811] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"315.91493ms","start":"2024-07-17T02:07:51.3683Z","end":"2024-07-17T02:07:51.684215Z","steps":["trace[1696915811] 'process raft request'  (duration: 269.338893ms)","trace[1696915811] 'compare'  (duration: 46.000452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.684483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.073221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:07:51.684879Z","caller":"traceutil/trace.go:171","msg":"trace[788779948] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1440; }","duration":"154.559007ms","start":"2024-07-17T02:07:51.530309Z","end":"2024-07-17T02:07:51.684868Z","steps":["trace[788779948] 'agreement among raft nodes before linearized reading'  (duration: 153.972223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:07:51.686157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T02:07:51.368284Z","time spent":"316.016028ms","remote":"127.0.0.1:54094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-343600-m03\" mod_revision:1435 > success:<request_put:<key:\"/registry/minions/multinode-343600-m03\" value_size:2787 >> failure:<request_range:<key:\"/registry/minions/multinode-343600-m03\" > >"}
	{"level":"info","ts":"2024-07-17T02:07:51.684259Z","caller":"traceutil/trace.go:171","msg":"trace[733279489] linearizableReadLoop","detail":"{readStateIndex:1701; appliedIndex:1700; }","duration":"149.085956ms","start":"2024-07-17T02:07:51.535161Z","end":"2024-07-17T02:07:51.684247Z","steps":["trace[733279489] 'read index received'  (duration: 102.314225ms)","trace[733279489] 'applied index is now lower than readState.Index'  (duration: 46.770731ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:57.933889Z","caller":"traceutil/trace.go:171","msg":"trace[1157037549] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"134.713343ms","start":"2024-07-17T02:07:57.799153Z","end":"2024-07-17T02:07:57.933866Z","steps":["trace[1157037549] 'process raft request'  (duration: 118.150293ms)","trace[1157037549] 'compare'  (duration: 16.437454ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.084008Z","caller":"traceutil/trace.go:171","msg":"trace[861469173] transaction","detail":"{read_only:false; response_revision:1449; number_of_response:1; }","duration":"191.891891ms","start":"2024-07-17T02:07:57.892075Z","end":"2024-07-17T02:07:58.083967Z","steps":["trace[861469173] 'process raft request'  (duration: 162.879779ms)","trace[861469173] 'compare'  (duration: 28.877116ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.281477Z","caller":"traceutil/trace.go:171","msg":"trace[1029922395] transaction","detail":"{read_only:false; response_revision:1450; number_of_response:1; }","duration":"152.699855ms","start":"2024-07-17T02:07:58.128759Z","end":"2024-07-17T02:07:58.281459Z","steps":["trace[1029922395] 'process raft request'  (duration: 73.524105ms)","trace[1029922395] 'compare'  (duration: 78.894858ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:08:02.438563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.888134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-343600-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-17T02:08:02.438671Z","caller":"traceutil/trace.go:171","msg":"trace[1739914459] range","detail":"{range_begin:/registry/minions/multinode-343600-m03; range_end:; response_count:1; response_revision:1459; }","duration":"183.056129ms","start":"2024-07-17T02:08:02.255602Z","end":"2024-07-17T02:08:02.438658Z","steps":["trace[1739914459] 'range keys from in-memory index tree'  (duration: 182.583642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:08:02.438582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.136257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-17T02:08:02.439152Z","caller":"traceutil/trace.go:171","msg":"trace[89915440] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1459; }","duration":"134.726841ms","start":"2024-07-17T02:08:02.304415Z","end":"2024-07-17T02:08:02.439141Z","steps":["trace[89915440] 'range keys from in-memory index tree'  (duration: 133.989162ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:08:02.583228Z","caller":"traceutil/trace.go:171","msg":"trace[1380485395] transaction","detail":"{read_only:false; response_revision:1460; number_of_response:1; }","duration":"136.847484ms","start":"2024-07-17T02:08:02.44636Z","end":"2024-07-17T02:08:02.583207Z","steps":["trace[1380485395] 'process raft request'  (duration: 136.606391ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:09:15 up 24 min,  0 users,  load average: 0.38, 0.38, 0.32
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 02:08:14.275996       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:08:24.272845       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:08:24.273067       1 main.go:303] handling current node
	I0717 02:08:24.273114       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:08:24.273163       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:08:34.272079       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:08:34.272226       1 main.go:303] handling current node
	I0717 02:08:34.272244       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:08:34.272252       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:08:44.272082       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:08:44.272121       1 main.go:303] handling current node
	I0717 02:08:44.272136       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:08:44.272152       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:08:54.275982       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:08:54.276109       1 main.go:303] handling current node
	I0717 02:08:54.276133       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:08:54.276158       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:09:04.271601       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:09:04.271678       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:09:04.271901       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:09:04.272049       1 main.go:303] handling current node
	I0717 02:09:14.280793       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:09:14.280896       1 main.go:303] handling current node
	I0717 02:09:14.280917       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:09:14.280926       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 02:03:34.189300       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49832: use of closed network connection
	E0717 02:03:35.136967       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49837: use of closed network connection
	E0717 02:03:35.880019       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49842: use of closed network connection
	E0717 02:04:11.454010       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49860: use of closed network connection
	E0717 02:04:21.903848       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49862: use of closed network connection
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:37.358351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="245.084µs"
	I0717 01:47:37.775077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.40057ms"
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 01:52:06.859031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.949838ms"
	I0717 01:52:06.876210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.855684ms"
	I0717 01:52:06.899379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.106015ms"
	I0717 01:52:06.899571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0717 01:52:09.997094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.053979ms"
	I0717 01:52:09.999036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0717 02:07:47.450050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-343600-m03\" does not exist"
	I0717 02:07:47.466038       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-343600-m03" podCIDRs=["10.244.1.0/24"]
	I0717 02:07:51.299816       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-343600-m03"
	I0717 02:08:16.479927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-343600-m03"
	I0717 02:08:16.519666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.098µs"
	I0717 02:08:16.544360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.099µs"
	I0717 02:08:19.303837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.225114ms"
	I0717 02:08:19.305728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.099µs"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:04:22 multinode-343600 kubelet[2292]: E0717 02:04:22.212987    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:04:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:05:22 multinode-343600 kubelet[2292]: E0717 02:05:22.206921    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:05:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:05:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:05:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:05:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:06:22 multinode-343600 kubelet[2292]: E0717 02:06:22.202650    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:07:22 multinode-343600 kubelet[2292]: E0717 02:07:22.201857    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:08:22 multinode-343600 kubelet[2292]: E0717 02:08:22.202745    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:09:07.141425    2096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (12.6989606s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (273.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (70.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status --output json --alsologtostderr: exit status 2 (35.8447967s)

                                                
                                                
-- stdout --
	[{"Name":"multinode-343600","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-343600-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-343600-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:09:40.186555    1432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 19:09:40.194304    1432 out.go:291] Setting OutFile to fd 688 ...
	I0716 19:09:40.195127    1432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:09:40.195127    1432 out.go:304] Setting ErrFile to fd 992...
	I0716 19:09:40.195127    1432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:09:40.216017    1432 out.go:298] Setting JSON to true
	I0716 19:09:40.216017    1432 mustload.go:65] Loading cluster: multinode-343600
	I0716 19:09:40.216017    1432 notify.go:220] Checking for updates...
	I0716 19:09:40.216915    1432 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 19:09:40.216915    1432 status.go:255] checking status of multinode-343600 ...
	I0716 19:09:40.217631    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:09:42.423653    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:09:42.423754    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:42.423932    1432 status.go:330] multinode-343600 host status = "Running" (err=<nil>)
	I0716 19:09:42.424030    1432 host.go:66] Checking if "multinode-343600" exists ...
	I0716 19:09:42.425288    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:09:44.637822    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:09:44.637822    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:44.638040    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 19:09:47.233648    1432 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 19:09:47.233648    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:47.234395    1432 host.go:66] Checking if "multinode-343600" exists ...
	I0716 19:09:47.248266    1432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:09:47.248396    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:09:49.404704    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:09:49.404775    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:49.404847    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 19:09:51.975333    1432 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 19:09:51.976133    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:51.976216    1432 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 19:09:52.077853    1432 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8295703s)
	I0716 19:09:52.092140    1432 ssh_runner.go:195] Run: systemctl --version
	I0716 19:09:52.116098    1432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:09:52.144360    1432 kubeconfig.go:125] found "multinode-343600" server: "https://172.27.170.61:8443"
	I0716 19:09:52.144890    1432 api_server.go:166] Checking apiserver status ...
	I0716 19:09:52.158633    1432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 19:09:52.198804    1432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup
	W0716 19:09:52.219489    1432 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0716 19:09:52.232248    1432 ssh_runner.go:195] Run: ls
	I0716 19:09:52.240127    1432 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 19:09:52.247350    1432 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 19:09:52.247580    1432 status.go:422] multinode-343600 apiserver status = Running (err=<nil>)
	I0716 19:09:52.247643    1432 status.go:257] multinode-343600 status: &{Name:multinode-343600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 19:09:52.247643    1432 status.go:255] checking status of multinode-343600-m02 ...
	I0716 19:09:52.248893    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:09:54.428223    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:09:54.428499    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:54.428499    1432 status.go:330] multinode-343600-m02 host status = "Running" (err=<nil>)
	I0716 19:09:54.428499    1432 host.go:66] Checking if "multinode-343600-m02" exists ...
	I0716 19:09:54.429326    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:09:56.615717    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:09:56.615717    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:56.616586    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 19:09:59.215883    1432 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 19:09:59.215961    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:09:59.215961    1432 host.go:66] Checking if "multinode-343600-m02" exists ...
	I0716 19:09:59.229058    1432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:09:59.230117    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:10:01.405790    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:10:01.406447    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:01.406447    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 19:10:03.935375    1432 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 19:10:03.935375    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:03.935375    1432 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 19:10:04.039357    1432 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8092241s)
	I0716 19:10:04.053339    1432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:10:04.081806    1432 status.go:257] multinode-343600-m02 status: &{Name:multinode-343600-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0716 19:10:04.081879    1432 status.go:255] checking status of multinode-343600-m03 ...
	I0716 19:10:04.082750    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:10:06.252391    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:10:06.252391    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:06.252391    1432 status.go:330] multinode-343600-m03 host status = "Running" (err=<nil>)
	I0716 19:10:06.252391    1432 host.go:66] Checking if "multinode-343600-m03" exists ...
	I0716 19:10:06.253086    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:10:08.453815    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:10:08.453815    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:08.453815    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:10:11.023663    1432 main.go:141] libmachine: [stdout =====>] : 172.27.173.202
	
	I0716 19:10:11.023663    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:11.024540    1432 host.go:66] Checking if "multinode-343600-m03" exists ...
	I0716 19:10:11.038457    1432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:10:11.038457    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:10:13.193607    1432 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:10:13.193607    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:13.193906    1432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:10:15.753075    1432 main.go:141] libmachine: [stdout =====>] : 172.27.173.202
	
	I0716 19:10:15.753075    1432 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:10:15.753619    1432 sshutil.go:53] new ssh client: &{IP:172.27.173.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m03\id_rsa Username:docker}
	I0716 19:10:15.854587    1432 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8161136s)
	I0716 19:10:15.868563    1432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:10:15.894981    1432 status.go:257] multinode-343600-m03 status: &{Name:multinode-343600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-343600 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.1588639s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.4997408s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-343600                               | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:44 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- apply -f                   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT | 16 Jul 24 18:52 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- rollout                    | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | busybox-fc5497c4f-9zzvz                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-9zzvz -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| node    | add -p multinode-343600 -v 3                      | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:08 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
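The run of `sed` edits above rewrites `/etc/containerd/config.toml`: pin the pause image, set `SystemdCgroup = false` (cgroupfs driver), force the `runc.v2` runtime, pin the CNI `conf_dir`, and re-insert `enable_unprivileged_ports = true` under the CRI plugin table. Assuming a stock containerd v2 config layout (the exact keys depend on the default config shipped in the ISO), the resulting CRI section looks roughly like:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.9"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false

  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
```

This is a sketch of the end state, not a dump of the file from this run.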
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
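The bash one-liner above is minikube's idempotent `/etc/hosts` update: drop any stale `host.minikube.internal` line, append the current gateway IP, and `sudo cp` the temp file into place (plain `>` redirection would not run under sudo, hence the temp-file dance). The same string transformation sketched in Go; `upsertHostsEntry` is a hypothetical name:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes every line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n172.27.160.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(in, "172.27.160.1", "host.minikube.internal"))
}
```

Running it twice with the same IP yields the same file, which is the property the grep-then-append idiom is after.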
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
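From the figures above (a 359,632,088-byte preload tarball copied over SSH in ~1.70 s, then lz4-extracted in ~8.61 s), the implied throughput can be backed out with quick arithmetic; a sketch, using the exact numbers from this log:

```go
package main

import "fmt"

func main() {
	// Sizes and timings taken from the log lines above.
	const tarballBytes = 359632088.0
	copyRate := tarballBytes / 1.6953071 / (1024 * 1024)    // scp over SSH, MiB/s
	extractRate := tarballBytes / 8.6146516 / (1024 * 1024) // tar -I lz4, MiB/s
	fmt.Printf("copy: %.1f MiB/s, extract: %.1f MiB/s\n", copyRate, extractRate)
}
```

Roughly 200 MiB/s for the copy and 40 MiB/s for the extraction, i.e. the decompress-and-untar step, not the Hyper-V network path, dominates the preload time.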
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
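The `diff ... || { mv ...; systemctl ...; }` one-liner above is an update-if-changed guard: Docker is only re-enabled and restarted when the freshly written `docker.service.new` differs from the installed unit (here it always differs, since `/lib/systemd/system/docker.service` does not exist yet on a new node, hence the `can't stat` message). A stubbed Go sketch of the decision, with the file I/O and systemctl calls left out:

```go
package main

import "fmt"

// updateUnit compares the installed unit file against the proposed one and
// reports whether the caller must daemon-reload and restart the service.
// Returning restart=false corresponds to the `diff` succeeding in the log's
// shell one-liner, in which case the .new copy is simply discarded.
func updateUnit(current, proposed string) (final string, restart bool) {
	if current == proposed {
		return current, false // unchanged: leave the running service alone
	}
	return proposed, true // differs (or missing): install and restart
}

func main() {
	// First provision: no prior docker.service, so a restart is required.
	_, restart := updateUnit("", "[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(restart)
}
```

Skipping the restart when nothing changed avoids bouncing dockerd (and every container on the node) on each reprovision.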
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
	
	
	==> Docker <==
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336423189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336625889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336741790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336832990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e933ef2daad4364897479f1d4f6dd2faf79a854c01e8e9af2ac4b320898cb5f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 01:52:09 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353261558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353669157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353691157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.354089456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb7b6f4d3bd7f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   e933ef2daad43       busybox-fc5497c4f-9zzvz
	832a042d8e687       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              22 minutes ago      Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                                         22 minutes ago      Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                                         23 minutes ago      Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                                         23 minutes ago      Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                                         23 minutes ago      Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	[INFO] 10.244.0.3:60325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249894s
	[INFO] 10.244.0.3:49103 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185058091s
	[INFO] 10.244.0.3:40233 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040129057s
	[INFO] 10.244.0.3:53435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.056299346s
	[INFO] 10.244.0.3:52034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177795s
	[INFO] 10.244.0.3:55399 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037734119s
	[INFO] 10.244.0.3:55087 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260193s
	[INFO] 10.244.0.3:47273 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232394s
	[INFO] 10.244.0.3:48029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.115999484s
	[INFO] 10.244.0.3:49805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126996s
	[INFO] 10.244.0.3:42118 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112698s
	[INFO] 10.244.0.3:50779 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153196s
	[INFO] 10.244.0.3:49493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098397s
	[INFO] 10.244.0.3:36336 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160395s
	[INFO] 10.244.0.3:37610 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068999s
	[INFO] 10.244.0.3:51523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052899s
	[INFO] 10.244.0.3:49356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333991s
	[INFO] 10.244.0.3:39090 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137797s
	[INFO] 10.244.0.3:50560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244893s
	[INFO] 10.244.0.3:44091 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164296s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:10:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9zzvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-wlznl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 23m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m   kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m   kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m   kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m   node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                22m   kubelet          Node multinode-343600 status is now: NodeReady
	
	
	Name:               multinode-343600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T19_07_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:07:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:10:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:07:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:07:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:07:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:08:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.173.202
	  Hostname:    multinode-343600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c97ec282efd48b88cab0b67f2c8f7c2
	  System UUID:                bad18aee-b3d1-0c44-b82f-1f20fb05d065
	  Boot ID:                    33c029cd-4782-43da-a050-56424fd1feae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xwt6c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-ghs2x              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m49s
	  kube-system                 kube-proxy-4bg7x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node multinode-343600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m45s                  node-controller  Node multinode-343600-m03 event: Registered Node multinode-343600-m03 in Controller
	  Normal  NodeReady                2m20s                  kubelet          Node multinode-343600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:52] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T02:05:13.843808Z","caller":"traceutil/trace.go:171","msg":"trace[1739602045] linearizableReadLoop","detail":"{readStateIndex:1507; appliedIndex:1506; }","duration":"107.913433ms","start":"2024-07-17T02:05:13.735876Z","end":"2024-07-17T02:05:13.84379Z","steps":["trace[1739602045] 'read index received'  (duration: 107.540343ms)","trace[1739602045] 'applied index is now lower than readState.Index'  (duration: 372.39µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:05:13.844005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.068229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:05:13.844085Z","caller":"traceutil/trace.go:171","msg":"trace[1309265040] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1280; }","duration":"108.230624ms","start":"2024-07-17T02:05:13.735844Z","end":"2024-07-17T02:05:13.844075Z","steps":["trace[1309265040] 'agreement among raft nodes before linearized reading'  (duration: 108.040129ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:05:13.84481Z","caller":"traceutil/trace.go:171","msg":"trace[1249349102] transaction","detail":"{read_only:false; response_revision:1280; number_of_response:1; }","duration":"172.038629ms","start":"2024-07-17T02:05:13.672761Z","end":"2024-07-17T02:05:13.8448Z","steps":["trace[1249349102] 'process raft request'  (duration: 170.732764ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:05:18.090986Z","caller":"traceutil/trace.go:171","msg":"trace[486786045] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"108.572613ms","start":"2024-07-17T02:05:17.982392Z","end":"2024-07-17T02:05:18.090964Z","steps":["trace[486786045] 'process raft request'  (duration: 108.31692ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:16.649225Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1137}
	{"level":"info","ts":"2024-07-17T02:07:16.65943Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1137,"took":"9.63174ms","hash":61041692,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-17T02:07:16.659558Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":61041692,"revision":1137,"compact-revision":895}
	{"level":"info","ts":"2024-07-17T02:07:51.533931Z","caller":"traceutil/trace.go:171","msg":"trace[462829157] transaction","detail":"{read_only:false; response_revision:1438; number_of_response:1; }","duration":"230.454648ms","start":"2024-07-17T02:07:51.303457Z","end":"2024-07-17T02:07:51.533912Z","steps":["trace[462829157] 'process raft request'  (duration: 230.337651ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.534107Z","caller":"traceutil/trace.go:171","msg":"trace[2024600941] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1700; }","duration":"209.685912ms","start":"2024-07-17T02:07:51.324411Z","end":"2024-07-17T02:07:51.534097Z","steps":["trace[2024600941] 'read index received'  (duration: 209.681812ms)","trace[2024600941] 'applied index is now lower than readState.Index'  (duration: 3.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.534885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.788109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T02:07:51.53521Z","caller":"traceutil/trace.go:171","msg":"trace[1749208603] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1438; }","duration":"210.773183ms","start":"2024-07-17T02:07:51.324407Z","end":"2024-07-17T02:07:51.53518Z","steps":["trace[1749208603] 'agreement among raft nodes before linearized reading'  (duration: 209.719411ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.684235Z","caller":"traceutil/trace.go:171","msg":"trace[1696915811] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"315.91493ms","start":"2024-07-17T02:07:51.3683Z","end":"2024-07-17T02:07:51.684215Z","steps":["trace[1696915811] 'process raft request'  (duration: 269.338893ms)","trace[1696915811] 'compare'  (duration: 46.000452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.684483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.073221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:07:51.684879Z","caller":"traceutil/trace.go:171","msg":"trace[788779948] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1440; }","duration":"154.559007ms","start":"2024-07-17T02:07:51.530309Z","end":"2024-07-17T02:07:51.684868Z","steps":["trace[788779948] 'agreement among raft nodes before linearized reading'  (duration: 153.972223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:07:51.686157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T02:07:51.368284Z","time spent":"316.016028ms","remote":"127.0.0.1:54094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-343600-m03\" mod_revision:1435 > success:<request_put:<key:\"/registry/minions/multinode-343600-m03\" value_size:2787 >> failure:<request_range:<key:\"/registry/minions/multinode-343600-m03\" > >"}
	{"level":"info","ts":"2024-07-17T02:07:51.684259Z","caller":"traceutil/trace.go:171","msg":"trace[733279489] linearizableReadLoop","detail":"{readStateIndex:1701; appliedIndex:1700; }","duration":"149.085956ms","start":"2024-07-17T02:07:51.535161Z","end":"2024-07-17T02:07:51.684247Z","steps":["trace[733279489] 'read index received'  (duration: 102.314225ms)","trace[733279489] 'applied index is now lower than readState.Index'  (duration: 46.770731ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:57.933889Z","caller":"traceutil/trace.go:171","msg":"trace[1157037549] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"134.713343ms","start":"2024-07-17T02:07:57.799153Z","end":"2024-07-17T02:07:57.933866Z","steps":["trace[1157037549] 'process raft request'  (duration: 118.150293ms)","trace[1157037549] 'compare'  (duration: 16.437454ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.084008Z","caller":"traceutil/trace.go:171","msg":"trace[861469173] transaction","detail":"{read_only:false; response_revision:1449; number_of_response:1; }","duration":"191.891891ms","start":"2024-07-17T02:07:57.892075Z","end":"2024-07-17T02:07:58.083967Z","steps":["trace[861469173] 'process raft request'  (duration: 162.879779ms)","trace[861469173] 'compare'  (duration: 28.877116ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.281477Z","caller":"traceutil/trace.go:171","msg":"trace[1029922395] transaction","detail":"{read_only:false; response_revision:1450; number_of_response:1; }","duration":"152.699855ms","start":"2024-07-17T02:07:58.128759Z","end":"2024-07-17T02:07:58.281459Z","steps":["trace[1029922395] 'process raft request'  (duration: 73.524105ms)","trace[1029922395] 'compare'  (duration: 78.894858ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:08:02.438563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.888134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-343600-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-17T02:08:02.438671Z","caller":"traceutil/trace.go:171","msg":"trace[1739914459] range","detail":"{range_begin:/registry/minions/multinode-343600-m03; range_end:; response_count:1; response_revision:1459; }","duration":"183.056129ms","start":"2024-07-17T02:08:02.255602Z","end":"2024-07-17T02:08:02.438658Z","steps":["trace[1739914459] 'range keys from in-memory index tree'  (duration: 182.583642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:08:02.438582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.136257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-17T02:08:02.439152Z","caller":"traceutil/trace.go:171","msg":"trace[89915440] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1459; }","duration":"134.726841ms","start":"2024-07-17T02:08:02.304415Z","end":"2024-07-17T02:08:02.439141Z","steps":["trace[89915440] 'range keys from in-memory index tree'  (duration: 133.989162ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:08:02.583228Z","caller":"traceutil/trace.go:171","msg":"trace[1380485395] transaction","detail":"{read_only:false; response_revision:1460; number_of_response:1; }","duration":"136.847484ms","start":"2024-07-17T02:08:02.44636Z","end":"2024-07-17T02:08:02.583207Z","steps":["trace[1380485395] 'process raft request'  (duration: 136.606391ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:10:36 up 25 min,  0 users,  load average: 0.45, 0.44, 0.34
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 02:09:34.275150       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:09:44.273187       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:09:44.273465       1 main.go:303] handling current node
	I0717 02:09:44.277485       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:09:44.277667       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:09:54.279831       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:09:54.279939       1 main.go:303] handling current node
	I0717 02:09:54.279977       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:09:54.279991       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:10:04.271650       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:10:04.272107       1 main.go:303] handling current node
	I0717 02:10:04.272220       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:10:04.272310       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:10:14.271786       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:10:14.271913       1 main.go:303] handling current node
	I0717 02:10:14.271936       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:10:14.271946       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:10:24.280871       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:10:24.280945       1 main.go:303] handling current node
	I0717 02:10:24.280963       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:10:24.280970       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:10:34.276981       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:10:34.277089       1 main.go:303] handling current node
	I0717 02:10:34.277109       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:10:34.277362       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 02:03:34.189300       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49832: use of closed network connection
	E0717 02:03:35.136967       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49837: use of closed network connection
	E0717 02:03:35.880019       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49842: use of closed network connection
	E0717 02:04:11.454010       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49860: use of closed network connection
	E0717 02:04:21.903848       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49862: use of closed network connection
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:37.358351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="245.084µs"
	I0717 01:47:37.775077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.40057ms"
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 01:52:06.859031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.949838ms"
	I0717 01:52:06.876210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.855684ms"
	I0717 01:52:06.899379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.106015ms"
	I0717 01:52:06.899571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0717 01:52:09.997094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.053979ms"
	I0717 01:52:09.999036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0717 02:07:47.450050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-343600-m03\" does not exist"
	I0717 02:07:47.466038       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-343600-m03" podCIDRs=["10.244.1.0/24"]
	I0717 02:07:51.299816       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-343600-m03"
	I0717 02:08:16.479927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-343600-m03"
	I0717 02:08:16.519666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.098µs"
	I0717 02:08:16.544360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.099µs"
	I0717 02:08:19.303837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.225114ms"
	I0717 02:08:19.305728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.099µs"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:06:22 multinode-343600 kubelet[2292]: E0717 02:06:22.202650    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:06:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:07:22 multinode-343600 kubelet[2292]: E0717 02:07:22.201857    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:07:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:08:22 multinode-343600 kubelet[2292]: E0717 02:08:22.202745    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:09:22 multinode-343600 kubelet[2292]: E0717 02:09:22.204196    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:10:22 multinode-343600 kubelet[2292]: E0717 02:10:22.203113    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:10:28.199298   13964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
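The recurring stderr warning above comes from the Docker CLI context lookup: the context name is hashed with SHA-256 and the digest is used as the metadata directory name, so "default" maps to the `37a8eec1…` path that does not exist on this machine. A minimal sketch of the path derivation (the layout is Docker CLI convention; this is an illustration, not minikube code):

```shell
# Derive the context metadata path the Docker CLI looks up.
# The directory name is the SHA-256 digest of the context name.
ctx="default"
hash=$(printf '%s' "$ctx" | sha256sum | awk '{print $1}')
echo "$HOME/.docker/contexts/meta/$hash/meta.json"
```

For the "default" context the digest is `37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f`, matching the path in the warning.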
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (12.1467188s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (70.11s)

                                                
                                    
TestMultiNode/serial/StopNode (120.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 node stop m03
E0716 19:11:05.805899    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 node stop m03: (34.9515183s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status: exit status 7 (25.7539366s)

                                                
                                                
-- stdout --
	multinode-343600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-343600-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-343600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:11:25.255451   13468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr: exit status 7 (25.7656999s)

                                                
                                                
-- stdout --
	multinode-343600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-343600-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-343600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:11:51.005213   14784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 19:11:51.012823   14784 out.go:291] Setting OutFile to fd 248 ...
	I0716 19:11:51.014005   14784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:11:51.014005   14784 out.go:304] Setting ErrFile to fd 648...
	I0716 19:11:51.014090   14784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:11:51.027938   14784 out.go:298] Setting JSON to false
	I0716 19:11:51.027938   14784 mustload.go:65] Loading cluster: multinode-343600
	I0716 19:11:51.027938   14784 notify.go:220] Checking for updates...
	I0716 19:11:51.028946   14784 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 19:11:51.028946   14784 status.go:255] checking status of multinode-343600 ...
	I0716 19:11:51.029935   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:11:53.188993   14784 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:11:53.189050   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:11:53.189050   14784 status.go:330] multinode-343600 host status = "Running" (err=<nil>)
	I0716 19:11:53.189050   14784 host.go:66] Checking if "multinode-343600" exists ...
	I0716 19:11:53.189872   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:11:55.375369   14784 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:11:55.375865   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:11:55.375942   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 19:11:57.919306   14784 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 19:11:57.919370   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:11:57.919370   14784 host.go:66] Checking if "multinode-343600" exists ...
	I0716 19:11:57.933038   14784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:11:57.933038   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 19:12:00.067853   14784 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:12:00.067929   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:00.068030   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 19:12:02.587355   14784 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 19:12:02.587355   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:02.588055   14784 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 19:12:02.685868   14784 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.75273s)
	I0716 19:12:02.702400   14784 ssh_runner.go:195] Run: systemctl --version
	I0716 19:12:02.728638   14784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:12:02.762008   14784 kubeconfig.go:125] found "multinode-343600" server: "https://172.27.170.61:8443"
	I0716 19:12:02.762111   14784 api_server.go:166] Checking apiserver status ...
	I0716 19:12:02.775204   14784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 19:12:02.812854   14784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup
	W0716 19:12:02.829179   14784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0716 19:12:02.842342   14784 ssh_runner.go:195] Run: ls
	I0716 19:12:02.849053   14784 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 19:12:02.855728   14784 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 19:12:02.855728   14784 status.go:422] multinode-343600 apiserver status = Running (err=<nil>)
	I0716 19:12:02.855728   14784 status.go:257] multinode-343600 status: &{Name:multinode-343600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0716 19:12:02.855728   14784 status.go:255] checking status of multinode-343600-m02 ...
	I0716 19:12:02.856447   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:12:04.999481   14784 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:12:04.999481   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:04.999481   14784 status.go:330] multinode-343600-m02 host status = "Running" (err=<nil>)
	I0716 19:12:04.999718   14784 host.go:66] Checking if "multinode-343600-m02" exists ...
	I0716 19:12:05.000698   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:12:07.181011   14784 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:12:07.181299   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:07.181436   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 19:12:09.719608   14784 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 19:12:09.719608   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:09.719608   14784 host.go:66] Checking if "multinode-343600-m02" exists ...
	I0716 19:12:09.732338   14784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0716 19:12:09.732338   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 19:12:11.849080   14784 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:12:11.849250   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:11.849250   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 19:12:14.381094   14784 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 19:12:14.381094   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:14.382011   14784 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 19:12:14.475960   14784 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7436058s)
	I0716 19:12:14.489360   14784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 19:12:14.524191   14784 status.go:257] multinode-343600-m02 status: &{Name:multinode-343600-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0716 19:12:14.524191   14784 status.go:255] checking status of multinode-343600-m03 ...
	I0716 19:12:14.525993   14784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:12:16.628696   14784 main.go:141] libmachine: [stdout =====>] : Off
	
	I0716 19:12:16.629078   14784 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:16.629078   14784 status.go:330] multinode-343600-m03 host status = "Stopped" (err=<nil>)
	I0716 19:12:16.629078   14784 status.go:343] host is not running, skipping remaining checks
	I0716 19:12:16.629078   14784 status.go:257] multinode-343600-m03 status: &{Name:multinode-343600-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
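In the trace above, the driver polls VM state through PowerShell (`( Hyper-V\Get-VM … ).state`); an `Off` result is reported as host status "Stopped" and the remaining kubelet/apiserver checks are skipped. A rough sketch of that state translation, as observed in the log (hypothetical mapping, not the driver's actual code):

```shell
# Map a Hyper-V VM state string (stdout of Get-VM .state) to a
# minikube-style host status, as seen above: "Off" -> "Stopped".
vm_state="Off"
case "$vm_state" in
  Running)      host="Running" ;;
  Off|Stopped)  host="Stopped" ;;
  *)            host="Unknown" ;;
esac
echo "host: $host"
```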
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr": multinode-343600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode-343600-m02
type: Worker
host: Running
kubelet: Stopped

                                                
                                                
multinode-343600-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-343600 status --alsologtostderr": multinode-343600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode-343600-m02
type: Worker
host: Running
kubelet: Stopped

                                                
                                                
multinode-343600-m03
type: Worker
host: Stopped
kubelet: Stopped
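The two assertion failures above count `kubelet: Running` and `kubelet: Stopped` lines in the status output: after stopping only m03 the test expects two running kubelets and one stopped, but m02's kubelet is also down, giving 1 running and 2 stopped. A minimal re-count against the captured output (standalone sketch, not the test's actual code):

```shell
# Count kubelet states in the captured `minikube status` output above.
status='multinode-343600
kubelet: Running
multinode-343600-m02
kubelet: Stopped
multinode-343600-m03
kubelet: Stopped'
running=$(printf '%s\n' "$status" | grep -c 'kubelet: Running')
stopped=$(printf '%s\n' "$status" | grep -c 'kubelet: Stopped')
echo "$running running, $stopped stopped"
```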

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.0843761s)
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.1978401s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-343600 -- apply -f                   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT | 16 Jul 24 18:52 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- rollout                    | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o                | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | busybox-fc5497c4f-9zzvz                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-9zzvz -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec                       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| node    | add -p multinode-343600 -v 3                      | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:08 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | multinode-343600 node stop m03                    | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:10 PDT | 16 Jul 24 19:11 PDT |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
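The "Waiting for host to start..." phase above polls `( Hyper-V\Get-VM ).state` and the first NIC's `ipaddresses[0]` until an address appears. A minimal sketch of that retry loop, with the query command passed in as an argument so it can stand in for the `powershell.exe` call (the attempt bound and sleep are illustrative, not the driver's actual values):

```shell
# Poll a query command until it prints an IP, mirroring the driver's
# state/ipaddresses polling seen in the log above.
wait_for_ip() {
  tries=0
  while [ "$tries" -lt 100 ]; do      # bound is illustrative; the driver has its own timeout
    tries=$((tries + 1))
    ip=$("$@" 2>/dev/null || true)    # e.g. powershell.exe ... ipaddresses[0]
    if [ -n "$ip" ]; then
      echo "$ip after $tries polls"
      return 0
    fi
    sleep 0                           # the real loop waits roughly a second per attempt
  done
  return 1
}
```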
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
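The hostname wiring above either rewrites an existing `127.0.1.1` entry or appends one. The same logic can be exercised without `sudo` by taking the hosts file as an argument; this is a self-contained sketch of that edit, not minikube's code:

```shell
# Rewrite the 127.0.1.1 line for $name, or append one, in the given hosts file;
# do nothing if the name is already present (matches the grep/sed flow above).
set_local_hostname() {
  name=$1; hosts=$2
  if ! grep -q "\s$name\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1\s' "$hosts"; then
      sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
    else
      echo "127.0.1.1 $name" >> "$hosts"
    fi
  fi
}
```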
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
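The `provision.go:117` line above generates a server certificate signed by the minikube CA with the listed SANs. A hypothetical `openssl` equivalent of that step, using a throwaway CA and the SAN list copied from the log line (file names and validity period are illustrative, and minikube does this in Go, not via openssl):

```shell
# Create a throwaway CA, then sign a server cert carrying the same SANs
# that provision.go:117 reports for multinode-343600-m02.
work=$(mktemp -d)
cd "$work"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/O=minikubeCA" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.multinode-343600-m02" 2>/dev/null
printf 'subjectAltName=IP:127.0.0.1,IP:172.27.171.221,DNS:localhost,DNS:minikube,DNS:multinode-343600-m02\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1 -extfile san.cnf 2>/dev/null
openssl verify -CAfile ca.pem server.pem
```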
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
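The docker.service update above uses an install-if-changed pattern: `diff` the freshly rendered unit against the installed one, and only on a difference move it into place and reload/restart. A small sketch of that pattern in isolation (the real flow additionally runs `systemctl daemon-reload`, `enable`, and `restart`, as the logged command shows):

```shell
# Replace $target with $new only when the contents differ; report the outcome.
# Mirrors the `diff -u ... || { mv ...; systemctl ... }` step from the log.
install_if_changed() {
  new=$1; target=$2
  if diff -u "$target" "$new" >/dev/null 2>&1; then
    rm -f "$new"
    echo unchanged
  else
    mv "$new" "$target"     # real flow follows this with daemon-reload/enable/restart
    echo updated
  fi
}
```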
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
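The clock-fix step above reads the guest epoch over SSH, computes the delta against the host (4.96s here), and resets the guest with `date -s`. A simplified sketch of that comparison; the drift threshold and the choice of which epoch to set are assumptions for illustration, and the real fix.go logic differs in detail:

```shell
# Given guest and host epoch seconds, print the corrective command the
# driver would run when drift exceeds a bound, else report in-sync.
clock_fix_cmd() {
  guest=$1; host=$2; max_drift=${3:-2}   # seconds; threshold is illustrative
  delta=$((guest - host))
  abs=${delta#-}                         # absolute value of the drift
  if [ "$abs" -gt "$max_drift" ]; then
    echo "sudo date -s @$host"
  else
    echo "in sync (delta=${delta}s)"
  fi
}
```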
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
	
	
	==> Docker <==
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336423189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336625889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336741790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336832990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e933ef2daad4364897479f1d4f6dd2faf79a854c01e8e9af2ac4b320898cb5f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 01:52:09 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353261558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353669157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353691157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.354089456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb7b6f4d3bd7f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   e933ef2daad43       busybox-fc5497c4f-9zzvz
	832a042d8e687       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              24 minutes ago      Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                                         24 minutes ago      Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                                         25 minutes ago      Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                                         25 minutes ago      Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                                         25 minutes ago      Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	[INFO] 10.244.0.3:60325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249894s
	[INFO] 10.244.0.3:49103 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185058091s
	[INFO] 10.244.0.3:40233 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040129057s
	[INFO] 10.244.0.3:53435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.056299346s
	[INFO] 10.244.0.3:52034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177795s
	[INFO] 10.244.0.3:55399 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037734119s
	[INFO] 10.244.0.3:55087 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260193s
	[INFO] 10.244.0.3:47273 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232394s
	[INFO] 10.244.0.3:48029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.115999484s
	[INFO] 10.244.0.3:49805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126996s
	[INFO] 10.244.0.3:42118 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112698s
	[INFO] 10.244.0.3:50779 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153196s
	[INFO] 10.244.0.3:49493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098397s
	[INFO] 10.244.0.3:36336 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160395s
	[INFO] 10.244.0.3:37610 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068999s
	[INFO] 10.244.0.3:51523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052899s
	[INFO] 10.244.0.3:49356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333991s
	[INFO] 10.244.0.3:39090 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137797s
	[INFO] 10.244.0.3:50560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244893s
	[INFO] 10.244.0.3:44091 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164296s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:12:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:07:44 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9zzvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-wlznl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24m   kube-proxy       
	  Normal  Starting                 25m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m   kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m   kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m   kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25m   node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                24m   kubelet          Node multinode-343600 status is now: NodeReady
	
	
	Name:               multinode-343600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T19_07_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:07:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:11:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.173.202
	  Hostname:    multinode-343600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c97ec282efd48b88cab0b67f2c8f7c2
	  System UUID:                bad18aee-b3d1-0c44-b82f-1f20fb05d065
	  Boot ID:                    33c029cd-4782-43da-a050-56424fd1feae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xwt6c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kindnet-ghs2x              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m49s
	  kube-system                 kube-proxy-4bg7x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m49s (x2 over 4m49s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x2 over 4m49s)  kubelet          Node multinode-343600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x2 over 4m49s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m45s                  node-controller  Node multinode-343600-m03 event: Registered Node multinode-343600-m03 in Controller
	  Normal  NodeReady                4m20s                  kubelet          Node multinode-343600-m03 status is now: NodeReady
	  Normal  NodeNotReady             55s                    node-controller  Node multinode-343600-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:52] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T02:07:51.533931Z","caller":"traceutil/trace.go:171","msg":"trace[462829157] transaction","detail":"{read_only:false; response_revision:1438; number_of_response:1; }","duration":"230.454648ms","start":"2024-07-17T02:07:51.303457Z","end":"2024-07-17T02:07:51.533912Z","steps":["trace[462829157] 'process raft request'  (duration: 230.337651ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.534107Z","caller":"traceutil/trace.go:171","msg":"trace[2024600941] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1700; }","duration":"209.685912ms","start":"2024-07-17T02:07:51.324411Z","end":"2024-07-17T02:07:51.534097Z","steps":["trace[2024600941] 'read index received'  (duration: 209.681812ms)","trace[2024600941] 'applied index is now lower than readState.Index'  (duration: 3.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.534885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.788109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T02:07:51.53521Z","caller":"traceutil/trace.go:171","msg":"trace[1749208603] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1438; }","duration":"210.773183ms","start":"2024-07-17T02:07:51.324407Z","end":"2024-07-17T02:07:51.53518Z","steps":["trace[1749208603] 'agreement among raft nodes before linearized reading'  (duration: 209.719411ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.684235Z","caller":"traceutil/trace.go:171","msg":"trace[1696915811] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"315.91493ms","start":"2024-07-17T02:07:51.3683Z","end":"2024-07-17T02:07:51.684215Z","steps":["trace[1696915811] 'process raft request'  (duration: 269.338893ms)","trace[1696915811] 'compare'  (duration: 46.000452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.684483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.073221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:07:51.684879Z","caller":"traceutil/trace.go:171","msg":"trace[788779948] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1440; }","duration":"154.559007ms","start":"2024-07-17T02:07:51.530309Z","end":"2024-07-17T02:07:51.684868Z","steps":["trace[788779948] 'agreement among raft nodes before linearized reading'  (duration: 153.972223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:07:51.686157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T02:07:51.368284Z","time spent":"316.016028ms","remote":"127.0.0.1:54094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-343600-m03\" mod_revision:1435 > success:<request_put:<key:\"/registry/minions/multinode-343600-m03\" value_size:2787 >> failure:<request_range:<key:\"/registry/minions/multinode-343600-m03\" > >"}
	{"level":"info","ts":"2024-07-17T02:07:51.684259Z","caller":"traceutil/trace.go:171","msg":"trace[733279489] linearizableReadLoop","detail":"{readStateIndex:1701; appliedIndex:1700; }","duration":"149.085956ms","start":"2024-07-17T02:07:51.535161Z","end":"2024-07-17T02:07:51.684247Z","steps":["trace[733279489] 'read index received'  (duration: 102.314225ms)","trace[733279489] 'applied index is now lower than readState.Index'  (duration: 46.770731ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:57.933889Z","caller":"traceutil/trace.go:171","msg":"trace[1157037549] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"134.713343ms","start":"2024-07-17T02:07:57.799153Z","end":"2024-07-17T02:07:57.933866Z","steps":["trace[1157037549] 'process raft request'  (duration: 118.150293ms)","trace[1157037549] 'compare'  (duration: 16.437454ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.084008Z","caller":"traceutil/trace.go:171","msg":"trace[861469173] transaction","detail":"{read_only:false; response_revision:1449; number_of_response:1; }","duration":"191.891891ms","start":"2024-07-17T02:07:57.892075Z","end":"2024-07-17T02:07:58.083967Z","steps":["trace[861469173] 'process raft request'  (duration: 162.879779ms)","trace[861469173] 'compare'  (duration: 28.877116ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.281477Z","caller":"traceutil/trace.go:171","msg":"trace[1029922395] transaction","detail":"{read_only:false; response_revision:1450; number_of_response:1; }","duration":"152.699855ms","start":"2024-07-17T02:07:58.128759Z","end":"2024-07-17T02:07:58.281459Z","steps":["trace[1029922395] 'process raft request'  (duration: 73.524105ms)","trace[1029922395] 'compare'  (duration: 78.894858ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:08:02.438563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.888134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-343600-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-17T02:08:02.438671Z","caller":"traceutil/trace.go:171","msg":"trace[1739914459] range","detail":"{range_begin:/registry/minions/multinode-343600-m03; range_end:; response_count:1; response_revision:1459; }","duration":"183.056129ms","start":"2024-07-17T02:08:02.255602Z","end":"2024-07-17T02:08:02.438658Z","steps":["trace[1739914459] 'range keys from in-memory index tree'  (duration: 182.583642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:08:02.438582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.136257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-17T02:08:02.439152Z","caller":"traceutil/trace.go:171","msg":"trace[89915440] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1459; }","duration":"134.726841ms","start":"2024-07-17T02:08:02.304415Z","end":"2024-07-17T02:08:02.439141Z","steps":["trace[89915440] 'range keys from in-memory index tree'  (duration: 133.989162ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:08:02.583228Z","caller":"traceutil/trace.go:171","msg":"trace[1380485395] transaction","detail":"{read_only:false; response_revision:1460; number_of_response:1; }","duration":"136.847484ms","start":"2024-07-17T02:08:02.44636Z","end":"2024-07-17T02:08:02.583207Z","steps":["trace[1380485395] 'process raft request'  (duration: 136.606391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:11:24.483596Z","caller":"traceutil/trace.go:171","msg":"trace[182214649] transaction","detail":"{read_only:false; response_revision:1658; number_of_response:1; }","duration":"179.381042ms","start":"2024-07-17T02:11:24.304195Z","end":"2024-07-17T02:11:24.483576Z","steps":["trace[182214649] 'process raft request'  (duration: 179.23744ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:11:25.634418Z","caller":"traceutil/trace.go:171","msg":"trace[1300292607] linearizableReadLoop","detail":"{readStateIndex:1964; appliedIndex:1963; }","duration":"103.613334ms","start":"2024-07-17T02:11:25.530788Z","end":"2024-07-17T02:11:25.634401Z","steps":["trace[1300292607] 'read index received'  (duration: 103.552533ms)","trace[1300292607] 'applied index is now lower than readState.Index'  (duration: 60.201µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:11:25.634824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.037741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:11:25.634917Z","caller":"traceutil/trace.go:171","msg":"trace[1757730791] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1659; }","duration":"104.269544ms","start":"2024-07-17T02:11:25.530637Z","end":"2024-07-17T02:11:25.634907Z","steps":["trace[1757730791] 'agreement among raft nodes before linearized reading'  (duration: 103.955939ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:11:25.635118Z","caller":"traceutil/trace.go:171","msg":"trace[1848997321] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"162.547863ms","start":"2024-07-17T02:11:25.472557Z","end":"2024-07-17T02:11:25.635105Z","steps":["trace[1848997321] 'process raft request'  (duration: 161.70205ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:12:16.670261Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1378}
	{"level":"info","ts":"2024-07-17T02:12:16.680696Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1378,"took":"9.552517ms","hash":629436316,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1712128,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-17T02:12:16.680812Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":629436316,"revision":1378,"compact-revision":1137}
	
	
	==> kernel <==
	 02:12:36 up 27 min,  0 users,  load average: 0.36, 0.37, 0.33
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 02:11:34.275386       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:11:44.272036       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:11:44.272286       1 main.go:303] handling current node
	I0717 02:11:44.272359       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:11:44.272543       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:11:54.277299       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:11:54.277581       1 main.go:303] handling current node
	I0717 02:11:54.277730       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:11:54.277743       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:12:04.275189       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:12:04.275451       1 main.go:303] handling current node
	I0717 02:12:04.275841       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:12:04.276063       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:12:14.271745       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:12:14.271850       1 main.go:303] handling current node
	I0717 02:12:14.271871       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:12:14.271878       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:12:24.275887       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:12:24.275939       1 main.go:303] handling current node
	I0717 02:12:24.275957       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:12:24.275963       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:12:34.276026       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:12:34.276155       1 main.go:303] handling current node
	I0717 02:12:34.276175       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:12:34.276183       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 02:03:34.189300       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49832: use of closed network connection
	E0717 02:03:35.136967       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49837: use of closed network connection
	E0717 02:03:35.880019       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49842: use of closed network connection
	E0717 02:04:11.454010       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49860: use of closed network connection
	E0717 02:04:21.903848       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49862: use of closed network connection
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 01:52:06.859031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.949838ms"
	I0717 01:52:06.876210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.855684ms"
	I0717 01:52:06.899379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.106015ms"
	I0717 01:52:06.899571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0717 01:52:09.997094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.053979ms"
	I0717 01:52:09.999036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0717 02:07:47.450050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-343600-m03\" does not exist"
	I0717 02:07:47.466038       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-343600-m03" podCIDRs=["10.244.1.0/24"]
	I0717 02:07:51.299816       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-343600-m03"
	I0717 02:08:16.479927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-343600-m03"
	I0717 02:08:16.519666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.098µs"
	I0717 02:08:16.544360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.099µs"
	I0717 02:08:19.303837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.225114ms"
	I0717 02:08:19.305728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.099µs"
	I0717 02:11:41.458932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.766706ms"
	I0717 02:11:41.459469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.503µs"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:08:22 multinode-343600 kubelet[2292]: E0717 02:08:22.202745    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:08:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:09:22 multinode-343600 kubelet[2292]: E0717 02:09:22.204196    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:09:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:10:22 multinode-343600 kubelet[2292]: E0717 02:10:22.203113    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:11:22 multinode-343600 kubelet[2292]: E0717 02:11:22.204341    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:12:22 multinode-343600 kubelet[2292]: E0717 02:12:22.201086    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:12:28.855563    1840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (12.0911013s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (120.30s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (164.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 node start m03 -v=7 --alsologtostderr
E0716 19:14:00.822130    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 node start m03 -v=7 --alsologtostderr: exit status 1 (1m25.6034917s)

                                                
                                                
-- stdout --
	* Starting "multinode-343600-m03" worker node in "multinode-343600" cluster
	* Restarting existing hyperv VM for "multinode-343600-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:12:50.597936   15312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 19:12:50.606066   15312 out.go:291] Setting OutFile to fd 960 ...
	I0716 19:12:50.606817   15312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:12:50.606817   15312 out.go:304] Setting ErrFile to fd 1020...
	I0716 19:12:50.606817   15312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 19:12:50.626262   15312 mustload.go:65] Loading cluster: multinode-343600
	I0716 19:12:50.627034   15312 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 19:12:50.627917   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:12:52.814097   15312 main.go:141] libmachine: [stdout =====>] : Off
	
	I0716 19:12:52.814097   15312 main.go:141] libmachine: [stderr =====>] : 
	W0716 19:12:52.814097   15312 host.go:58] "multinode-343600-m03" host status: Stopped
	I0716 19:12:52.818010   15312 out.go:177] * Starting "multinode-343600-m03" worker node in "multinode-343600" cluster
	I0716 19:12:52.821218   15312 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 19:12:52.821512   15312 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 19:12:52.821512   15312 cache.go:56] Caching tarball of preloaded images
	I0716 19:12:52.821591   15312 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 19:12:52.822243   15312 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 19:12:52.822292   15312 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 19:12:52.825042   15312 start.go:360] acquireMachinesLock for multinode-343600-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 19:12:52.825042   15312 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600-m03"
	I0716 19:12:52.825042   15312 start.go:96] Skipping create...Using existing machine configuration
	I0716 19:12:52.825042   15312 fix.go:54] fixHost starting: m03
	I0716 19:12:52.825886   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:12:54.972255   15312 main.go:141] libmachine: [stdout =====>] : Off
	
	I0716 19:12:54.972255   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:54.972255   15312 fix.go:112] recreateIfNeeded on multinode-343600-m03: state=Stopped err=<nil>
	W0716 19:12:54.972255   15312 fix.go:138] unexpected machine state, will restart: <nil>
	I0716 19:12:54.975333   15312 out.go:177] * Restarting existing hyperv VM for "multinode-343600-m03" ...
	I0716 19:12:54.979136   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m03
	I0716 19:12:58.318852   15312 main.go:141] libmachine: [stdout =====>] : 
	I0716 19:12:58.318852   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:12:58.319094   15312 main.go:141] libmachine: Waiting for host to start...
	I0716 19:12:58.319167   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:00.585288   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:00.585288   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:00.586303   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:03.186006   15312 main.go:141] libmachine: [stdout =====>] : 
	I0716 19:13:03.186727   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:04.197178   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:06.405370   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:06.405534   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:06.405651   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:08.985201   15312 main.go:141] libmachine: [stdout =====>] : 
	I0716 19:13:08.985338   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:10.000548   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:12.220393   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:12.220393   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:12.220393   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:14.744586   15312 main.go:141] libmachine: [stdout =====>] : 
	I0716 19:13:14.744675   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:15.755678   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:18.024421   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:18.024574   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:18.024703   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:20.575865   15312 main.go:141] libmachine: [stdout =====>] : 
	I0716 19:13:20.576400   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:21.587207   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:23.818100   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:23.818334   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:23.818415   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:26.403703   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:26.403703   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:26.407072   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:28.542148   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:28.542148   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:28.542435   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:31.103416   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:31.103416   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:31.103886   15312 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 19:13:31.106395   15312 machine.go:94] provisionDockerMachine start ...
	I0716 19:13:31.106527   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:33.255162   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:33.256093   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:33.256093   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:35.729584   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:35.729584   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:35.735190   15312 main.go:141] libmachine: Using SSH client type: native
	I0716 19:13:35.735916   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
	I0716 19:13:35.735916   15312 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 19:13:35.854309   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 19:13:35.854309   15312 buildroot.go:166] provisioning hostname "multinode-343600-m03"
	I0716 19:13:35.854309   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:38.023113   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:38.023113   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:38.023610   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:40.604957   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:40.604957   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:40.611919   15312 main.go:141] libmachine: Using SSH client type: native
	I0716 19:13:40.611919   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
	I0716 19:13:40.611919   15312 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m03 && echo "multinode-343600-m03" | sudo tee /etc/hostname
	I0716 19:13:40.763750   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m03
	
	I0716 19:13:40.763855   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:42.863918   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:42.863918   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:42.864643   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:45.411345   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:45.411345   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:45.417559   15312 main.go:141] libmachine: Using SSH client type: native
	I0716 19:13:45.418220   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
	I0716 19:13:45.418220   15312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 19:13:45.554583   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 19:13:45.554711   15312 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 19:13:45.554775   15312 buildroot.go:174] setting up certificates
	I0716 19:13:45.554869   15312 provision.go:84] configureAuth start
	I0716 19:13:45.555028   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:47.694917   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:47.694917   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:47.694991   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:50.295543   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:50.295543   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:50.296227   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:52.511113   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:52.511113   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:52.512114   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:55.110732   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:55.110732   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:55.111409   15312 provision.go:143] copyHostCerts
	I0716 19:13:55.111682   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 19:13:55.111682   15312 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 19:13:55.111682   15312 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 19:13:55.112413   15312 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 19:13:55.112936   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 19:13:55.113926   15312 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 19:13:55.113926   15312 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 19:13:55.113926   15312 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 19:13:55.115313   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 19:13:55.115686   15312 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 19:13:55.115686   15312 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 19:13:55.116245   15312 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 19:13:55.117122   15312 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m03 san=[127.0.0.1 172.27.165.51 localhost minikube multinode-343600-m03]
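The `configureAuth` step above issues a server certificate signed by the minikube CA, with the node's IPs and names as SANs. minikube does this in Go; an equivalent sketch with openssl (all names and SANs here mirror the log but are illustrative):

```shell
# Sketch: mint a CA-signed server cert with subjectAltName entries.
set -eu
dir=$(mktemp -d); cd "$dir"
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -x509 -new -key ca-key.pem -days 365 -subj "/CN=minikubeCA" -out ca.pem
openssl genrsa -out server-key.pem 2048 2>/dev/null
openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-343600-m03" -out server.csr
# SANs matching the log: loopback, node IP, localhost, minikube, node name
printf 'subjectAltName=IP:127.0.0.1,IP:172.27.165.51,DNS:localhost,DNS:minikube,DNS:multinode-343600-m03\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 365 -extfile san.cnf -out server.pem 2>/dev/null
openssl verify -CAfile ca.pem server.pem   # → server.pem: OK
```

The resulting `ca.pem`/`server.pem`/`server-key.pem` trio is what the next step copies to `/etc/docker/` so dockerd can require TLS on port 2376.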
	I0716 19:13:55.204896   15312 provision.go:177] copyRemoteCerts
	I0716 19:13:55.221077   15312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 19:13:55.221267   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:13:57.415195   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:13:57.415195   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:57.415195   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:13:59.918402   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:13:59.918402   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:13:59.919553   15312 sshutil.go:53] new ssh client: &{IP:172.27.165.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m03\id_rsa Username:docker}
	I0716 19:14:00.016435   15312 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7953415s)
	I0716 19:14:00.016640   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 19:14:00.017257   15312 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 19:14:00.072092   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 19:14:00.072588   15312 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 19:14:00.122579   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 19:14:00.123039   15312 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 19:14:00.168165   15312 provision.go:87] duration metric: took 14.6131091s to configureAuth
	I0716 19:14:00.168165   15312 buildroot.go:189] setting minikube options for container-runtime
	I0716 19:14:00.168895   15312 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 19:14:00.168953   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:14:02.346788   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:14:02.347507   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:14:02.347683   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:14:04.924623   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:14:04.925462   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:14:04.931479   15312 main.go:141] libmachine: Using SSH client type: native
	I0716 19:14:04.932079   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
	I0716 19:14:04.932079   15312 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 19:14:05.066216   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 19:14:05.066216   15312 buildroot.go:70] root file system type: tmpfs
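The probe above relies on `df` printing only the filesystem-type column for `/`. On the buildroot guest this reports `tmpfs`; on a typical Linux host it would be `ext4`, `xfs`, or similar:

```shell
# Root-filesystem type probe, as run over SSH above.
fstype=$(df --output=fstype / | tail -n 1)
echo "$fstype"
```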
	I0716 19:14:05.066216   15312 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 19:14:05.066740   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:14:07.229135   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:14:07.229652   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:14:07.229856   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:14:09.762359   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:14:09.762359   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:14:09.769527   15312 main.go:141] libmachine: Using SSH client type: native
	I0716 19:14:09.770090   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
	I0716 19:14:09.770266   15312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 19:14:09.930258   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 19:14:09.930364   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
	I0716 19:14:12.053388   15312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 19:14:12.053388   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:14:12.053754   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
	I0716 19:14:14.543015   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51
	
	I0716 19:14:14.543015   15312 main.go:141] libmachine: [stderr =====>] : 
	I0716 19:14:14.549710   15312 main.go:141] libmachine: Using SSH client type: native
	I0716 19:14:14.549969   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
	I0716 19:14:14.549969   15312 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
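The final command above uses a handy "install only if changed" idiom: `diff` the staged unit against the live one, and only when they differ (diff exits non-zero) move the new file into place and reload/restart the service. The same logic on plain temp files, without the `sudo`/`systemctl` parts the real command needs:

```shell
# Update-if-changed: replace the live file only when the staged copy differs.
set -eu
live=$(mktemp); staged=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd-old\n' > "$live"
printf 'ExecStart=/usr/bin/dockerd\n'     > "$staged"
changed=0
# diff exits non-zero on a difference; the || branch then installs the update
diff -u "$live" "$staged" >/dev/null || { mv "$staged" "$live"; changed=1; }
echo "changed=$changed"   # → changed=1
cat "$live"               # → ExecStart=/usr/bin/dockerd
```

Because the daemon-reload/restart only runs in the `||` branch, an unchanged unit file costs nothing and dockerd is not bounced on every provisioning pass.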

** /stderr **
multinode_test.go:284: W0716 19:12:50.597936   15312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 19:12:50.606066   15312 out.go:291] Setting OutFile to fd 960 ...
I0716 19:12:50.606817   15312 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 19:12:50.606817   15312 out.go:304] Setting ErrFile to fd 1020...
I0716 19:12:50.606817   15312 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 19:12:50.626262   15312 mustload.go:65] Loading cluster: multinode-343600
I0716 19:12:50.627034   15312 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 19:12:50.627917   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:12:52.814097   15312 main.go:141] libmachine: [stdout =====>] : Off

I0716 19:12:52.814097   15312 main.go:141] libmachine: [stderr =====>] : 
W0716 19:12:52.814097   15312 host.go:58] "multinode-343600-m03" host status: Stopped
I0716 19:12:52.818010   15312 out.go:177] * Starting "multinode-343600-m03" worker node in "multinode-343600" cluster
I0716 19:12:52.821218   15312 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0716 19:12:52.821512   15312 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0716 19:12:52.821512   15312 cache.go:56] Caching tarball of preloaded images
I0716 19:12:52.821591   15312 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0716 19:12:52.822243   15312 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0716 19:12:52.822292   15312 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
I0716 19:12:52.825042   15312 start.go:360] acquireMachinesLock for multinode-343600-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0716 19:12:52.825042   15312 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600-m03"
I0716 19:12:52.825042   15312 start.go:96] Skipping create...Using existing machine configuration
I0716 19:12:52.825042   15312 fix.go:54] fixHost starting: m03
I0716 19:12:52.825886   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:12:54.972255   15312 main.go:141] libmachine: [stdout =====>] : Off

I0716 19:12:54.972255   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:12:54.972255   15312 fix.go:112] recreateIfNeeded on multinode-343600-m03: state=Stopped err=<nil>
W0716 19:12:54.972255   15312 fix.go:138] unexpected machine state, will restart: <nil>
I0716 19:12:54.975333   15312 out.go:177] * Restarting existing hyperv VM for "multinode-343600-m03" ...
I0716 19:12:54.979136   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m03
I0716 19:12:58.318852   15312 main.go:141] libmachine: [stdout =====>] : 
I0716 19:12:58.318852   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:12:58.319094   15312 main.go:141] libmachine: Waiting for host to start...
I0716 19:12:58.319167   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:00.585288   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:00.585288   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:00.586303   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:03.186006   15312 main.go:141] libmachine: [stdout =====>] : 
I0716 19:13:03.186727   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:04.197178   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:06.405370   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:06.405534   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:06.405651   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:08.985201   15312 main.go:141] libmachine: [stdout =====>] : 
I0716 19:13:08.985338   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:10.000548   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:12.220393   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:12.220393   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:12.220393   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:14.744586   15312 main.go:141] libmachine: [stdout =====>] : 
I0716 19:13:14.744675   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:15.755678   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:18.024421   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:18.024574   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:18.024703   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:20.575865   15312 main.go:141] libmachine: [stdout =====>] : 
I0716 19:13:20.576400   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:21.587207   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:23.818100   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:23.818334   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:23.818415   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:26.403703   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:26.403703   15312 main.go:141] libmachine: [stderr =====>] : 
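The "Waiting for host to start..." sequence above is a poll loop: query the adapter's IP, get empty output until DHCP has assigned one, sleep, retry. A generic sketch of that shape, with the Hyper-V query replaced by a simulated probe (the real code shells out to `(Get-VM …).networkadapters[0].ipaddresses[0]`):

```shell
# Poll-until-output loop, with a simulated probe standing in for Hyper-V.
set -eu
attempt=0
ip=""
while [ -z "$ip" ] && [ "$attempt" -lt 10 ]; do
  attempt=$((attempt + 1))
  # simulated probe: reports no IP for the first two attempts
  if [ "$attempt" -ge 3 ]; then ip="172.27.165.51"; fi
  [ -n "$ip" ] || sleep 0.1
done
echo "got IP $ip after $attempt attempts"
```

Capping the attempt count (here at 10) is what turns an unbounded hang into the timeout errors the test harness can report.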
I0716 19:13:26.407072   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:28.542148   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:28.542148   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:28.542435   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:31.103416   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:31.103416   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:31.103886   15312 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
I0716 19:13:31.106395   15312 machine.go:94] provisionDockerMachine start ...
I0716 19:13:31.106527   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:33.255162   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:33.256093   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:33.256093   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:35.729584   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:35.729584   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:35.735190   15312 main.go:141] libmachine: Using SSH client type: native
I0716 19:13:35.735916   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
I0716 19:13:35.735916   15312 main.go:141] libmachine: About to run SSH command:
hostname
I0716 19:13:35.854309   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0716 19:13:35.854309   15312 buildroot.go:166] provisioning hostname "multinode-343600-m03"
I0716 19:13:35.854309   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:38.023113   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:38.023113   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:38.023610   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:40.604957   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:40.604957   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:40.611919   15312 main.go:141] libmachine: Using SSH client type: native
I0716 19:13:40.611919   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
I0716 19:13:40.611919   15312 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-343600-m03 && echo "multinode-343600-m03" | sudo tee /etc/hostname
I0716 19:13:40.763750   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m03

I0716 19:13:40.763855   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:42.863918   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:42.863918   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:42.864643   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:45.411345   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:45.411345   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:45.417559   15312 main.go:141] libmachine: Using SSH client type: native
I0716 19:13:45.418220   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
I0716 19:13:45.418220   15312 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-343600-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-343600-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0716 19:13:45.554583   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0716 19:13:45.554711   15312 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0716 19:13:45.554775   15312 buildroot.go:174] setting up certificates
I0716 19:13:45.554869   15312 provision.go:84] configureAuth start
I0716 19:13:45.555028   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:47.694917   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:47.694917   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:47.694991   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:50.295543   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:50.295543   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:50.296227   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:52.511113   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:52.511113   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:52.512114   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:55.110732   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:55.110732   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:55.111409   15312 provision.go:143] copyHostCerts
I0716 19:13:55.111682   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
I0716 19:13:55.111682   15312 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0716 19:13:55.111682   15312 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0716 19:13:55.112413   15312 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
I0716 19:13:55.112936   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
I0716 19:13:55.113926   15312 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0716 19:13:55.113926   15312 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0716 19:13:55.113926   15312 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0716 19:13:55.115313   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
I0716 19:13:55.115686   15312 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0716 19:13:55.115686   15312 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0716 19:13:55.116245   15312 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0716 19:13:55.117122   15312 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m03 san=[127.0.0.1 172.27.165.51 localhost minikube multinode-343600-m03]
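The "generating server cert" step above can be approximated with plain openssl; everything below (file names, subjects, the 1-day validity, the temp dir) is an illustrative assumption, not what minikube's provision.go actually invokes — only the SAN list is taken from the log line.

```shell
# Illustrative sketch of the server-cert step logged above: sign a server
# cert with a local CA, using the same SANs as the log line. File names,
# subjects, and validity are demo assumptions, not minikube's values.
cd "$(mktemp -d)"
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -key ca-key.pem -subj "/CN=minikubeCA" -days 1 -out ca.pem
openssl genrsa -out server-key.pem 2048 2>/dev/null
openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-343600-m03" -out server.csr
printf 'subjectAltName=IP:127.0.0.1,IP:172.27.165.51,DNS:localhost,DNS:minikube,DNS:multinode-343600-m03\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -extfile san.cnf -days 1 -out server.pem 2>/dev/null
openssl verify -CAfile ca.pem server.pem
```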
I0716 19:13:55.204896   15312 provision.go:177] copyRemoteCerts
I0716 19:13:55.221077   15312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0716 19:13:55.221267   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:13:57.415195   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:13:57.415195   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:57.415195   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:13:59.918402   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:13:59.918402   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:13:59.919553   15312 sshutil.go:53] new ssh client: &{IP:172.27.165.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m03\id_rsa Username:docker}
I0716 19:14:00.016435   15312 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7953415s)
I0716 19:14:00.016640   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0716 19:14:00.017257   15312 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0716 19:14:00.072092   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0716 19:14:00.072588   15312 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0716 19:14:00.122579   15312 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0716 19:14:00.123039   15312 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0716 19:14:00.168165   15312 provision.go:87] duration metric: took 14.6131091s to configureAuth
I0716 19:14:00.168165   15312 buildroot.go:189] setting minikube options for container-runtime
I0716 19:14:00.168895   15312 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 19:14:00.168953   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:14:02.346788   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:14:02.347507   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:14:02.347683   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:14:04.924623   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:14:04.925462   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:14:04.931479   15312 main.go:141] libmachine: Using SSH client type: native
I0716 19:14:04.932079   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
I0716 19:14:04.932079   15312 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0716 19:14:05.066216   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0716 19:14:05.066216   15312 buildroot.go:70] root file system type: tmpfs
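The probe logged above is how the provisioner decides the guest's root filesystem type. Run locally it is the same one-liner; on the buildroot guest it prints "tmpfs" (as in the log), on an ordinary host it prints that host's own fstype.

```shell
# Same root-filesystem probe as the SSH command logged above; on the
# buildroot guest this reports "tmpfs", locally it reports the host fstype.
fstype=$(df --output=fstype / | tail -n 1)
echo "root file system type: ${fstype}"
```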
I0716 19:14:05.066216   15312 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0716 19:14:05.066740   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:14:07.229135   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:14:07.229652   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:14:07.229856   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:14:09.762359   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:14:09.762359   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:14:09.769527   15312 main.go:141] libmachine: Using SSH client type: native
I0716 19:14:09.770090   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
I0716 19:14:09.770266   15312 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0716 19:14:09.930258   15312 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0716 19:14:09.930364   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m03 ).state
I0716 19:14:12.053388   15312 main.go:141] libmachine: [stdout =====>] : Running

I0716 19:14:12.053388   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:14:12.053754   15312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m03 ).networkadapters[0]).ipaddresses[0]
I0716 19:14:14.543015   15312 main.go:141] libmachine: [stdout =====>] : 172.27.165.51

I0716 19:14:14.543015   15312 main.go:141] libmachine: [stderr =====>] : 
I0716 19:14:14.549710   15312 main.go:141] libmachine: Using SSH client type: native
I0716 19:14:14.549969   15312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.165.51 22 <nil> <nil>}
I0716 19:14:14.549969   15312 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
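The SSH command above is a write-if-changed idiom: replace the unit and restart docker only when the newly generated file differs. A minimal local sketch, using throwaway files in a temp dir instead of /lib/systemd/system (no sudo, no real systemctl — the file contents here are stand-ins):

```shell
# Sketch of the write-if-changed idiom from the SSH command above.
cd "$(mktemp -d)"
printf '[Service]\nExecStart=/usr/bin/dockerd --old-flags\n' > docker.service
printf '[Service]\nExecStart=/usr/bin/dockerd --new-flags\n' > docker.service.new
# diff exits non-zero when the unit changed, so the || branch installs the
# new file; the real command then runs daemon-reload and restarts docker.
diff -u docker.service docker.service.new || {
  mv docker.service.new docker.service
  echo "unit updated, would daemon-reload and restart docker here"
}
```

When the two files are identical, diff exits 0 and the branch (and the expensive docker restart) is skipped entirely.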
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-343600 node start m03 -v=7 --alsologtostderr": exit status 1
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (228.1µs)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-343600 status -v=7 --alsologtostderr" : context deadline exceeded
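The run of identical "context deadline exceeded" attempts above comes from a retry loop whose shared context has already expired. A deadline-bounded retry loop of that shape can be sketched as follows; `false` stands in for the real `minikube status` invocation and the 3-second budget is arbitrary:

```shell
# Sketch of a deadline-bounded retry loop like the one behind the repeated
# "context deadline exceeded" attempts above; `false` is a stand-in for the
# real status command, so the deadline branch is always the one taken.
deadline=$(( $(date +%s) + 3 ))
attempts=0
while true; do
  attempts=$(( attempts + 1 ))
  if false; then           # stand-in status check: always fails here
    echo "status ok after $attempts attempts"
    break
  fi
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "context deadline exceeded after $attempts attempts"
    break
  fi
  sleep 1
done
```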
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-343600 -n multinode-343600: (12.0339049s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-343600 logs -n 25: (8.4320579s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-343600 -- rollout       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 18:52 PDT |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:02 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:02 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT | 16 Jul 24 19:03 PDT |
	|         | busybox-fc5497c4f-9zzvz -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:03 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- get pods -o   | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:04 PDT |
	|         | busybox-fc5497c4f-9zzvz              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-9zzvz -- sh        |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.160.1            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-343600 -- exec          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT |                     |
	|         | busybox-fc5497c4f-xwt6c              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| node    | add -p multinode-343600 -v 3         | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:04 PDT | 16 Jul 24 19:08 PDT |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-343600 node stop m03       | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:10 PDT | 16 Jul 24 19:11 PDT |
	| node    | multinode-343600 node start          | multinode-343600 | minikube1\jenkins | v1.33.1 | 16 Jul 24 19:12 PDT |                     |
	|         | m03 -v=7 --alsologtostderr           |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 18:44:16
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 18:44:16.180869    2528 out.go:291] Setting OutFile to fd 688 ...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.181593    2528 out.go:304] Setting ErrFile to fd 984...
	I0716 18:44:16.181593    2528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 18:44:16.205376    2528 out.go:298] Setting JSON to false
	I0716 18:44:16.209441    2528 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22295,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 18:44:16.209441    2528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 18:44:16.213928    2528 out.go:177] * [multinode-343600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 18:44:16.218888    2528 notify.go:220] Checking for updates...
	I0716 18:44:16.220649    2528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:44:16.225672    2528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 18:44:16.228513    2528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 18:44:16.231628    2528 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 18:44:16.233751    2528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 18:44:16.237504    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:44:16.237504    2528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 18:44:21.479230    2528 out.go:177] * Using the hyperv driver based on user configuration
	I0716 18:44:21.483872    2528 start.go:297] selected driver: hyperv
	I0716 18:44:21.484507    2528 start.go:901] validating driver "hyperv" against <nil>
	I0716 18:44:21.484649    2528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0716 18:44:21.540338    2528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 18:44:21.541905    2528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:44:21.541905    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:44:21.541905    2528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0716 18:44:21.541905    2528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0716 18:44:21.541905    2528 start.go:340] cluster config:
	{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:44:21.542595    2528 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 18:44:21.546087    2528 out.go:177] * Starting "multinode-343600" primary control-plane node in "multinode-343600" cluster
	I0716 18:44:21.551043    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:44:21.551043    2528 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 18:44:21.551043    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:44:21.551909    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:44:21.552300    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:44:21.552497    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:44:21.552792    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json: {Name:mkcf20b1713be975d077e7a92a8cdccdc372a384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:360] acquireMachinesLock for multinode-343600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:44:21.553023    2528 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-343600"
	I0716 18:44:21.554160    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:44:21.554160    2528 start.go:125] createHost starting for "" (driver="hyperv")
	I0716 18:44:21.558131    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:44:21.558131    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:44:21.558780    2528 client.go:168] LocalClient.Create starting
	I0716 18:44:21.559396    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.559526    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:44:21.560082    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:44:21.560295    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:44:23.601400    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:23.602371    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:44:25.266018    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:44:25.266502    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:25.266744    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:26.713065    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:26.713467    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:26.713531    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:30.210702    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:30.213459    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:44:30.665581    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: Creating VM...
	I0716 18:44:30.782994    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:44:33.602413    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:33.602733    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:44:33.602887    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:44:35.293509    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:35.293900    2528 main.go:141] libmachine: Creating VHD
	I0716 18:44:35.293962    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:44:39.013774    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6DACE1CA-2CA3-448C-B3FB-7CF917FFE9AB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0716 18:44:39.014658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:39.014658    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:44:39.014802    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:44:39.026814    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:42.200303    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:42.200751    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd' -SizeBytes 20000MB
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:45.163381    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:45.163918    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-343600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:44:48.763578    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:48.764387    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600 -DynamicMemoryEnabled $false
	I0716 18:44:50.992666    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:50.992777    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:50.992802    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600 -Count 2
	I0716 18:44:53.156396    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:53.156459    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\boot2docker.iso'
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:55.695327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:55.695616    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\disk.vhd'
	I0716 18:44:58.373919    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:44:58.374470    2528 main.go:141] libmachine: Starting VM...
	I0716 18:44:58.374629    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600
	I0716 18:45:02.165508    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:02.166663    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:45:02.166747    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:04.394449    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:04.395092    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:04.395274    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:06.935973    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:06.936122    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:07.950448    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:10.162222    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:10.162762    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:10.162857    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:12.782713    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:12.782753    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:13.784989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:16.007357    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:16.007447    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:16.007651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:18.561416    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:19.576409    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:21.809082    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:21.809213    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:21.809296    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:45:24.331287    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:25.334154    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:27.550387    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:27.550659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:30.103912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:30.104894    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:32.176999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:32.177332    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:45:32.177439    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:34.346826    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:34.346967    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:36.852260    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:36.852871    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:36.859641    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:36.870466    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:36.870466    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:45:37.006479    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:45:37.006592    2528 buildroot.go:166] provisioning hostname "multinode-343600"
	I0716 18:45:37.006690    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:39.157009    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:39.157250    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:41.731407    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:41.738582    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:41.739181    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:41.739181    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600 && echo "multinode-343600" | sudo tee /etc/hostname
	I0716 18:45:41.902041    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600
	
	I0716 18:45:41.902041    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:44.012824    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:46.462189    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:46.468345    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:45:46.469122    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:45:46.469122    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:45:46.613340    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:45:46.613340    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:45:46.613340    2528 buildroot.go:174] setting up certificates
	I0716 18:45:46.613340    2528 provision.go:84] configureAuth start
	I0716 18:45:46.613340    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:48.723962    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:48.724203    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:51.218754    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:51.218933    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:51.219344    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:53.320343    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:53.320670    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:45:55.807227    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:55.807570    2528 provision.go:143] copyHostCerts
	I0716 18:45:55.807716    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:45:55.808032    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:45:55.808121    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:45:55.808603    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:45:55.809878    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:45:55.810151    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:45:55.810151    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:45:55.810655    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:45:55.811611    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:45:55.811868    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:45:55.811868    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:45:55.812273    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:45:55.813591    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600 san=[127.0.0.1 172.27.170.61 localhost minikube multinode-343600]
	I0716 18:45:56.044623    2528 provision.go:177] copyRemoteCerts
	I0716 18:45:56.060323    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:45:56.060456    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:45:58.159981    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:45:58.160339    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:00.656291    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:00.657205    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:00.657483    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:00.763625    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7031521s)
	I0716 18:46:00.763625    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:46:00.763625    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:46:00.810749    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:46:00.810749    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0716 18:46:00.863397    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:46:00.864005    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:46:00.906827    2528 provision.go:87] duration metric: took 14.2934355s to configureAuth
	I0716 18:46:00.906827    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:46:00.907954    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:46:00.907954    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:02.985659    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:02.985897    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:02.985989    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:05.456038    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:05.462023    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:05.462805    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:05.462805    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:46:05.596553    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:46:05.596749    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:46:05.597063    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:46:05.597221    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:07.705405    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:10.213222    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:10.220315    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:10.220315    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:10.221009    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:46:10.372921    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:46:10.372921    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:12.478884    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:14.994729    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:15.001128    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:15.001630    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:15.001749    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:46:17.257429    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:46:17.257429    2528 machine.go:97] duration metric: took 45.079935s to provisionDockerMachine
	I0716 18:46:17.257429    2528 client.go:171] duration metric: took 1m55.6981414s to LocalClient.Create
	I0716 18:46:17.257429    2528 start.go:167] duration metric: took 1m55.6988816s to libmachine.API.Create "multinode-343600"
	I0716 18:46:17.257429    2528 start.go:293] postStartSetup for "multinode-343600" (driver="hyperv")
	I0716 18:46:17.257429    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:46:17.272461    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:46:17.273523    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:19.446999    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:22.079241    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:22.079494    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:22.181998    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9083458s)
	I0716 18:46:22.195131    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:46:22.202831    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:46:22.202996    2528 command_runner.go:130] > ID=buildroot
	I0716 18:46:22.202996    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:46:22.202996    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:46:22.203106    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:46:22.203141    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:46:22.203576    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:46:22.204530    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:46:22.204530    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:46:22.216559    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:46:22.235254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:46:22.284004    2528 start.go:296] duration metric: took 5.0265564s for postStartSetup
	I0716 18:46:22.287647    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:24.439502    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:24.440397    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:24.440508    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:27.008815    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:27.009327    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:27.009475    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:46:27.012789    2528 start.go:128] duration metric: took 2m5.4581778s to createHost
	I0716 18:46:27.012895    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:29.152094    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:29.152167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:31.666866    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:31.676254    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:31.676254    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:31.676254    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180791.800663024
	
	I0716 18:46:31.808569    2528 fix.go:216] guest clock: 1721180791.800663024
	I0716 18:46:31.808569    2528 fix.go:229] Guest: 2024-07-16 18:46:31.800663024 -0700 PDT Remote: 2024-07-16 18:46:27.0127896 -0700 PDT m=+130.920053601 (delta=4.787873424s)
	I0716 18:46:31.808569    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:33.954289    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:33.954504    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:36.486850    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:36.495114    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:46:36.496547    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.170.61 22 <nil> <nil>}
	I0716 18:46:36.496663    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721180791
	I0716 18:46:36.647696    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:46:31 UTC 2024
	
	I0716 18:46:36.647696    2528 fix.go:236] clock set: Wed Jul 17 01:46:31 UTC 2024
	 (err=<nil>)
	I0716 18:46:36.647696    2528 start.go:83] releasing machines lock for "multinode-343600", held for 2m15.0941871s
	I0716 18:46:36.647912    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:38.741215    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:38.741685    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:41.298764    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:41.299002    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:41.303128    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:46:41.303128    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:41.315135    2528 ssh_runner.go:195] Run: cat /version.json
	I0716 18:46:41.315135    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:46:43.467420    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467557    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:43.467651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:46:46.047212    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.047888    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.047955    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.077104    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:46:46.077461    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:46:46.077695    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:46:46.146257    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:46:46.146810    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8436645s)
	W0716 18:46:46.146810    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:46:46.162349    2528 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0716 18:46:46.162349    2528 ssh_runner.go:235] Completed: cat /version.json: (4.8471972s)
	I0716 18:46:46.176435    2528 ssh_runner.go:195] Run: systemctl --version
	I0716 18:46:46.185074    2528 command_runner.go:130] > systemd 252 (252)
	I0716 18:46:46.185166    2528 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0716 18:46:46.197907    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:46:46.206427    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0716 18:46:46.207687    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:46:46.221192    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:46:46.252774    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:46:46.252902    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:46:46.252954    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.253229    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:46:46.278942    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:46:46.278942    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:46:46.292287    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:46:46.305345    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:46:46.341183    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:46:46.360655    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:46:46.372645    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:46:46.404417    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.440777    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:46:46.480666    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:46:46.517269    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:46:46.555661    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:46:46.595134    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:46:46.636030    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:46:46.669748    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:46:46.687925    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:46:46.703692    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:46:46.738539    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:46.942316    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0716 18:46:46.974879    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:46:46.988183    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:46:47.012332    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:46:47.012460    2528 command_runner.go:130] > [Unit]
	I0716 18:46:47.012460    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:46:47.012460    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:46:47.012460    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:46:47.012460    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:46:47.012460    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:46:47.012626    2528 command_runner.go:130] > [Service]
	I0716 18:46:47.012626    2528 command_runner.go:130] > Type=notify
	I0716 18:46:47.012728    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:46:47.012728    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:46:47.012728    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:46:47.012806    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:46:47.012806    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:46:47.012923    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:46:47.012992    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:46:47.012992    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:46:47.013069    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:46:47.013069    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:46:47.013069    2528 command_runner.go:130] > ExecStart=
	I0716 18:46:47.013138    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:46:47.013214    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:46:47.013214    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:46:47.013322    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:46:47.013322    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:46:47.013407    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:46:47.013475    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:46:47.013475    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:46:47.013551    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:46:47.013551    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:46:47.013619    2528 command_runner.go:130] > Delegate=yes
	I0716 18:46:47.013619    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:46:47.013619    2528 command_runner.go:130] > KillMode=process
	I0716 18:46:47.013697    2528 command_runner.go:130] > [Install]
	I0716 18:46:47.013697    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:46:47.028178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.066976    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:46:47.117167    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:46:47.162324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.200633    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:46:47.280999    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:46:47.311522    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:46:47.351246    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0716 18:46:47.363386    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:46:47.370199    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:46:47.385151    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:46:47.403112    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:46:47.447914    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:46:47.649068    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:46:47.834164    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:46:47.835012    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:46:47.882780    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:48.088516    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:46:50.659348    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.570823s)
	I0716 18:46:50.671326    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0716 18:46:50.704324    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:50.741558    2528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0716 18:46:50.938029    2528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0716 18:46:51.121627    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.306392    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0716 18:46:51.345430    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0716 18:46:51.378469    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:46:51.593700    2528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0716 18:46:51.707062    2528 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0716 18:46:51.721305    2528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0716 18:46:51.731822    2528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0716 18:46:51.731937    2528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0716 18:46:51.731937    2528 command_runner.go:130] > Device: 0,22	Inode: 874         Links: 1
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0716 18:46:51.731937    2528 command_runner.go:130] > Access: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Modify: 2024-07-17 01:46:51.615620136 +0000
	I0716 18:46:51.731937    2528 command_runner.go:130] > Change: 2024-07-17 01:46:51.618619997 +0000
	I0716 18:46:51.732385    2528 command_runner.go:130] >  Birth: -
	I0716 18:46:51.732417    2528 start.go:563] Will wait 60s for crictl version
	I0716 18:46:51.746580    2528 ssh_runner.go:195] Run: which crictl
	I0716 18:46:51.755101    2528 command_runner.go:130] > /usr/bin/crictl
	I0716 18:46:51.769799    2528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0716 18:46:51.824492    2528 command_runner.go:130] > Version:  0.1.0
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeName:  docker
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0716 18:46:51.824492    2528 command_runner.go:130] > RuntimeApiVersion:  v1
	I0716 18:46:51.824590    2528 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0716 18:46:51.835722    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.870713    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.882072    2528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0716 18:46:51.913316    2528 command_runner.go:130] > 27.0.3
	I0716 18:46:51.920390    2528 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0716 18:46:51.920390    2528 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0716 18:46:51.923941    2528 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0716 18:46:51.924950    2528 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:40:9d:99 Flags:up|broadcast|multicast|running}
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: fe80::ce69:7862:e67d:4aae/64
	I0716 18:46:51.927974    2528 ip.go:210] interface addr: 172.27.160.1/20
	I0716 18:46:51.939642    2528 ssh_runner.go:195] Run: grep 172.27.160.1	host.minikube.internal$ /etc/hosts
	I0716 18:46:51.947379    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:46:51.972306    2528 kubeadm.go:883] updating cluster {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0716 18:46:51.972854    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:46:51.983141    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:46:52.003407    2528 docker.go:685] Got preloaded images: 
	I0716 18:46:52.003607    2528 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0716 18:46:52.016232    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:46:52.032577    2528 command_runner.go:139] > {"Repositories":{}}
	I0716 18:46:52.045824    2528 ssh_runner.go:195] Run: which lz4
	I0716 18:46:52.051365    2528 command_runner.go:130] > /usr/bin/lz4
	I0716 18:46:52.051365    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0716 18:46:52.065833    2528 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0716 18:46:52.073461    2528 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.073923    2528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0716 18:46:52.074120    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0716 18:46:53.746678    2528 docker.go:649] duration metric: took 1.6953071s to copy over tarball
	I0716 18:46:53.762926    2528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0716 18:47:02.378190    2528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146516s)
	I0716 18:47:02.378190    2528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0716 18:47:02.443853    2528 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0716 18:47:02.461816    2528 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0716 18:47:02.462758    2528 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0716 18:47:02.509022    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:02.711991    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:47:06.056294    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3442911s)
	I0716 18:47:06.068040    2528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0716 18:47:06.093728    2528 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0716 18:47:06.093728    2528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:06.093728    2528 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0716 18:47:06.093728    2528 cache_images.go:84] Images are preloaded, skipping loading
	I0716 18:47:06.094735    2528 kubeadm.go:934] updating node { 172.27.170.61 8443 v1.30.2 docker true true} ...
	I0716 18:47:06.094735    2528 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-343600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.170.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0716 18:47:06.102728    2528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0716 18:47:06.139756    2528 command_runner.go:130] > cgroupfs
	I0716 18:47:06.140705    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:06.140741    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:06.140741    2528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0716 18:47:06.140741    2528 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.170.61 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-343600 NodeName:multinode-343600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.170.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.170.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0716 18:47:06.140963    2528 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.170.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-343600"
	  kubeletExtraArgs:
	    node-ip: 172.27.170.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.170.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0716 18:47:06.152709    2528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubeadm
	I0716 18:47:06.170640    2528 command_runner.go:130] > kubectl
	I0716 18:47:06.170801    2528 command_runner.go:130] > kubelet
	I0716 18:47:06.170801    2528 binaries.go:44] Found k8s binaries, skipping transfer
	I0716 18:47:06.184230    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0716 18:47:06.200853    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0716 18:47:06.228427    2528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0716 18:47:06.260745    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0716 18:47:06.309644    2528 ssh_runner.go:195] Run: grep 172.27.170.61	control-plane.minikube.internal$ /etc/hosts
	I0716 18:47:06.317183    2528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.170.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0716 18:47:06.351658    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:06.546652    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:06.577151    2528 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600 for IP: 172.27.170.61
	I0716 18:47:06.577151    2528 certs.go:194] generating shared ca certs ...
	I0716 18:47:06.577151    2528 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0716 18:47:06.577515    2528 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0716 18:47:06.578513    2528 certs.go:256] generating profile certs ...
	I0716 18:47:06.578513    2528 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key
	I0716 18:47:06.578513    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt with IP's: []
	I0716 18:47:06.694114    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt ...
	I0716 18:47:06.694114    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.crt: {Name:mkba4b0bb7bd4b8160aa453885bbb83b755029a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.696111    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key ...
	I0716 18:47:06.696111    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\client.key: {Name:mkc96a03b2ccfa5f7d3f6218ab1ea66afc682b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.697124    2528 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff
	I0716 18:47:06.697124    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.170.61]
	I0716 18:47:06.792122    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff ...
	I0716 18:47:06.792122    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff: {Name:mk975e14a95758adfc06f8a7463dd5262943f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.794116    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff ...
	I0716 18:47:06.794116    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff: {Name:mk689785ac465f6ceb90616c7e99ead830d998e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:06.795110    2528 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt
	I0716 18:47:06.808107    2528 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key.b5c7a7ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key
	I0716 18:47:06.809109    2528 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key
	I0716 18:47:06.809109    2528 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt with IP's: []
	I0716 18:47:07.288057    2528 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt: {Name:mk330d4bb796a41ad6b7f8c6db7e071e0537ae41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key ...
	I0716 18:47:07.288057    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key: {Name:mk6e5431effe7ab951d381e9db2293e1f555f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0716 18:47:07.288057    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0716 18:47:07.293327    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0716 18:47:07.293559    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0716 18:47:07.293601    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0716 18:47:07.303030    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0716 18:47:07.311544    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem (1338 bytes)
	W0716 18:47:07.312221    2528 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740_empty.pem, impossibly tiny 0 bytes
	I0716 18:47:07.312354    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0716 18:47:07.313180    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0716 18:47:07.313496    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0716 18:47:07.313795    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0716 18:47:07.314332    2528 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem (1708 bytes)
	I0716 18:47:07.314645    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem -> /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.314895    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /usr/share/ca-certificates/47402.pem
	I0716 18:47:07.315038    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:07.316519    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0716 18:47:07.381340    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0716 18:47:07.442707    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0716 18:47:07.494751    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0716 18:47:07.536056    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0716 18:47:07.587006    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0716 18:47:07.633701    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0716 18:47:07.678881    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0716 18:47:07.726989    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4740.pem --> /usr/share/ca-certificates/4740.pem (1338 bytes)
	I0716 18:47:07.787254    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /usr/share/ca-certificates/47402.pem (1708 bytes)
	I0716 18:47:07.833375    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0716 18:47:07.879363    2528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0716 18:47:07.924777    2528 ssh_runner.go:195] Run: openssl version
	I0716 18:47:07.933228    2528 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0716 18:47:07.947212    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4740.pem && ln -fs /usr/share/ca-certificates/4740.pem /etc/ssl/certs/4740.pem"
	I0716 18:47:07.980824    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:07.988056    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:25 /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.002558    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4740.pem
	I0716 18:47:08.012225    2528 command_runner.go:130] > 51391683
	I0716 18:47:08.026051    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4740.pem /etc/ssl/certs/51391683.0"
	I0716 18:47:08.059591    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/47402.pem && ln -fs /usr/share/ca-certificates/47402.pem /etc/ssl/certs/47402.pem"
	I0716 18:47:08.100058    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108313    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.108844    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:25 /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.121807    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/47402.pem
	I0716 18:47:08.130492    2528 command_runner.go:130] > 3ec20f2e
	I0716 18:47:08.143156    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/47402.pem /etc/ssl/certs/3ec20f2e.0"
	I0716 18:47:08.176979    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0716 18:47:08.209581    2528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.216763    2528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:09 /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.233087    2528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0716 18:47:08.241421    2528 command_runner.go:130] > b5213941
	I0716 18:47:08.254994    2528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0716 18:47:08.290064    2528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0716 18:47:08.296438    2528 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0716 18:47:08.297118    2528 kubeadm.go:392] StartCluster: {Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 18:47:08.307066    2528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0716 18:47:08.345323    2528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0716 18:47:08.362447    2528 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0716 18:47:08.376785    2528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0716 18:47:08.404857    2528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0716 18:47:08.423186    2528 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0716 18:47:08.424329    2528 kubeadm.go:157] found existing configuration files:
	
	I0716 18:47:08.438954    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0716 18:47:08.456213    2528 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.456488    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0716 18:47:08.470157    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0716 18:47:08.502646    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0716 18:47:08.519520    2528 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.520218    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0716 18:47:08.532638    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0716 18:47:08.562821    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.579810    2528 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.580838    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0716 18:47:08.592870    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0716 18:47:08.622715    2528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0716 18:47:08.639394    2528 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.640321    2528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0716 18:47:08.656830    2528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0716 18:47:08.675184    2528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0716 18:47:09.062205    2528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:09.062333    2528 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0716 18:47:22.600142    2528 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600142    2528 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0716 18:47:22.600235    2528 kubeadm.go:310] [preflight] Running pre-flight checks
	I0716 18:47:22.600235    2528 command_runner.go:130] > [preflight] Running pre-flight checks
	I0716 18:47:22.600499    2528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600576    2528 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0716 18:47:22.600892    2528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.600892    2528 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0716 18:47:22.601282    2528 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601282    2528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0716 18:47:22.601424    2528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.601424    2528 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0716 18:47:22.605572    2528 out.go:204]   - Generating certificates and keys ...
	I0716 18:47:22.606120    2528 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0716 18:47:22.606181    2528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0716 18:47:22.606301    2528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606373    2528 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0716 18:47:22.606599    2528 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606708    2528 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0716 18:47:22.606867    2528 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.606867    2528 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0716 18:47:22.607568    2528 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607610    2528 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607749    2528 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607749    2528 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-343600] and IPs [172.27.170.61 127.0.0.1 ::1]
	I0716 18:47:22.607985    2528 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.607985    2528 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0716 18:47:22.608708    2528 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608708    2528 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0716 18:47:22.608979    2528 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0716 18:47:22.608979    2528 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0716 18:47:22.609050    2528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609050    2528 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0716 18:47:22.609209    2528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609209    2528 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0716 18:47:22.609517    2528 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609658    2528 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0716 18:47:22.609800    2528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.609800    2528 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0716 18:47:22.610540    2528 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610540    2528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0716 18:47:22.610755    2528 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.610850    2528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0716 18:47:22.614478    2528 out.go:204]   - Booting up control plane ...
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0716 18:47:22.614701    2528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.614701    2528 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0716 18:47:22.615525    2528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0716 18:47:22.615525    2528 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0716 18:47:22.616536    2528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001842102s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0716 18:47:22.616536    2528 kubeadm.go:310] [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [api-check] The API server is healthy after 7.002216596s
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0716 18:47:22.617555    2528 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0716 18:47:22.617555    2528 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.617555    2528 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0716 18:47:22.618542    2528 command_runner.go:130] > [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 kubeadm.go:310] [mark-control-plane] Marking the node multinode-343600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0716 18:47:22.618542    2528 command_runner.go:130] > [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.618542    2528 kubeadm.go:310] [bootstrap-token] Using token: x0dhgm.evh4v1guiv53l7v4
	I0716 18:47:22.622942    2528 out.go:204]   - Configuring RBAC rules ...
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0716 18:47:22.623956    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.623956    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0716 18:47:22.624957    2528 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0716 18:47:22.624957    2528 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0716 18:47:22.624957    2528 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.624957    2528 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0716 18:47:22.626140    2528 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626224    2528 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0716 18:47:22.626288    2528 kubeadm.go:310] 
	I0716 18:47:22.626288    2528 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626453    2528 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0716 18:47:22.626510    2528 kubeadm.go:310] 
	I0716 18:47:22.626664    2528 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626664    2528 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0716 18:47:22.626718    2528 kubeadm.go:310] 
	I0716 18:47:22.626792    2528 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0716 18:47:22.626846    2528 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0716 18:47:22.627027    2528 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627085    2528 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0716 18:47:22.627354    2528 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0716 18:47:22.627354    2528 kubeadm.go:310] 
	I0716 18:47:22.627354    2528 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627509    2528 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627548    2528 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0716 18:47:22.627548    2528 kubeadm.go:310] 
	I0716 18:47:22.627848    2528 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0716 18:47:22.627848    2528 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0716 18:47:22.628148    2528 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628148    2528 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0716 18:47:22.628390    2528 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0716 18:47:22.628470    2528 kubeadm.go:310] 
	I0716 18:47:22.628777    2528 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0716 18:47:22.628777    2528 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0716 18:47:22.628777    2528 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0716 18:47:22.629197    2528 kubeadm.go:310] 
	I0716 18:47:22.629337    2528 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629337    2528 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 \
	I0716 18:47:22.629494    2528 command_runner.go:130] > 	--control-plane 
	I0716 18:47:22.629494    2528 kubeadm.go:310] 	--control-plane 
	I0716 18:47:22.629742    2528 kubeadm.go:310] 
	I0716 18:47:22.629845    2528 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0716 18:47:22.629845    2528 kubeadm.go:310] 
	I0716 18:47:22.630034    2528 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630034    2528 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0dhgm.evh4v1guiv53l7v4 \
	I0716 18:47:22.630231    2528 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:803f556f649ce0bd987885ba0ea285bfba65c43620f4227441923878d3fe46f1 
	I0716 18:47:22.630231    2528 cni.go:84] Creating CNI manager for ""
	I0716 18:47:22.630231    2528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0716 18:47:22.633183    2528 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0716 18:47:22.650327    2528 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0716 18:47:22.658197    2528 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0716 18:47:22.658197    2528 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0716 18:47:22.658197    2528 command_runner.go:130] > Access: 2024-07-17 01:45:28.095720000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Modify: 2024-07-15 15:50:14.000000000 +0000
	I0716 18:47:22.658197    2528 command_runner.go:130] > Change: 2024-07-16 18:45:19.763000000 +0000
	I0716 18:47:22.658288    2528 command_runner.go:130] >  Birth: -
	I0716 18:47:22.658325    2528 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0716 18:47:22.658325    2528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0716 18:47:22.706052    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0716 18:47:23.286125    2528 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > serviceaccount/kindnet created
	I0716 18:47:23.286241    2528 command_runner.go:130] > daemonset.apps/kindnet created
	I0716 18:47:23.286344    2528 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0716 18:47:23.302726    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.303058    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-343600 minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=multinode-343600 minikube.k8s.io/primary=true
	I0716 18:47:23.319315    2528 command_runner.go:130] > -16
	I0716 18:47:23.319402    2528 ops.go:34] apiserver oom_adj: -16
	I0716 18:47:23.477167    2528 command_runner.go:130] > node/multinode-343600 labeled
	I0716 18:47:23.502850    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0716 18:47:23.514059    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:23.625264    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.029898    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.129926    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:24.517922    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:24.625736    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.018908    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.122741    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:25.520333    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:25.620702    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.020025    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.135097    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:26.523104    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:26.624730    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.029349    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.139131    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:27.531645    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:27.626235    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.030561    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.146556    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:28.517469    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:28.631684    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.022831    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.141623    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:29.526425    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:29.632072    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.024684    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.136573    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:30.526520    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:30.630266    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.032324    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.144283    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:31.531362    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:31.665981    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.024675    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.145177    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:32.530881    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:32.661539    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.022422    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.132375    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:33.527713    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:33.638713    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.028370    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.155221    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:34.518455    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:34.615114    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.016717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.124271    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:35.520717    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:35.659632    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.029061    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.167338    2528 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0716 18:47:36.521003    2528 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0716 18:47:36.652842    2528 command_runner.go:130] > NAME      SECRETS   AGE
	I0716 18:47:36.652842    2528 command_runner.go:130] > default   0         0s
	I0716 18:47:36.656190    2528 kubeadm.go:1113] duration metric: took 13.3697182s to wait for elevateKubeSystemPrivileges
	I0716 18:47:36.656279    2528 kubeadm.go:394] duration metric: took 28.3590584s to StartCluster
	I0716 18:47:36.656407    2528 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.656672    2528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:36.658430    2528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 18:47:36.660515    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0716 18:47:36.660515    2528 start.go:235] Will wait 6m0s for node &{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0716 18:47:36.660634    2528 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0716 18:47:36.660854    2528 addons.go:69] Setting storage-provisioner=true in profile "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:69] Setting default-storageclass=true in profile "multinode-343600"
	I0716 18:47:36.661101    2528 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-343600"
	I0716 18:47:36.660854    2528 addons.go:234] Setting addon storage-provisioner=true in "multinode-343600"
	I0716 18:47:36.661249    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:36.661333    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:47:36.662298    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.662853    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:36.665294    2528 out.go:177] * Verifying Kubernetes components...
	I0716 18:47:36.683056    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:47:37.033996    2528 command_runner.go:130] > apiVersion: v1
	I0716 18:47:37.034073    2528 command_runner.go:130] > data:
	I0716 18:47:37.034073    2528 command_runner.go:130] >   Corefile: |
	I0716 18:47:37.034073    2528 command_runner.go:130] >     .:53 {
	I0716 18:47:37.034141    2528 command_runner.go:130] >         errors
	I0716 18:47:37.034141    2528 command_runner.go:130] >         health {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            lameduck 5s
	I0716 18:47:37.034141    2528 command_runner.go:130] >         }
	I0716 18:47:37.034141    2528 command_runner.go:130] >         ready
	I0716 18:47:37.034141    2528 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0716 18:47:37.034141    2528 command_runner.go:130] >            pods insecure
	I0716 18:47:37.034253    2528 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0716 18:47:37.034328    2528 command_runner.go:130] >            ttl 30
	I0716 18:47:37.034328    2528 command_runner.go:130] >         }
	I0716 18:47:37.034328    2528 command_runner.go:130] >         prometheus :9153
	I0716 18:47:37.034328    2528 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0716 18:47:37.034406    2528 command_runner.go:130] >            max_concurrent 1000
	I0716 18:47:37.034406    2528 command_runner.go:130] >         }
	I0716 18:47:37.034406    2528 command_runner.go:130] >         cache 30
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loop
	I0716 18:47:37.034406    2528 command_runner.go:130] >         reload
	I0716 18:47:37.034406    2528 command_runner.go:130] >         loadbalance
	I0716 18:47:37.034406    2528 command_runner.go:130] >     }
	I0716 18:47:37.034406    2528 command_runner.go:130] > kind: ConfigMap
	I0716 18:47:37.034634    2528 command_runner.go:130] > metadata:
	I0716 18:47:37.034701    2528 command_runner.go:130] >   creationTimestamp: "2024-07-17T01:47:21Z"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   name: coredns
	I0716 18:47:37.034701    2528 command_runner.go:130] >   namespace: kube-system
	I0716 18:47:37.034701    2528 command_runner.go:130] >   resourceVersion: "223"
	I0716 18:47:37.034701    2528 command_runner.go:130] >   uid: 595602c4-5e06-4ddb-9dee-ea397f5fa901
	I0716 18:47:37.036878    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0716 18:47:37.140580    2528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0716 18:47:37.604521    2528 command_runner.go:130] > configmap/coredns replaced
	I0716 18:47:37.604650    2528 start.go:971] {"host.minikube.internal": 172.27.160.1} host record injected into CoreDNS's ConfigMap
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.605758    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:37.606816    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.606902    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:37.608532    2528 cert_rotation.go:137] Starting client certificate rotation controller
	I0716 18:47:37.609032    2528 node_ready.go:35] waiting up to 6m0s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:37.609302    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609302    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609402    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.609222    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.609526    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.609526    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.609683    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.627505    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628000    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Audit-Id: 492a828c-c3c7-4b69-b10b-8943ca03aa40
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.628000    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.628000    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.628935    2528 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0716 18:47:37.628935    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.628935    2528 round_trippers.go:580]     Audit-Id: 9db67fc9-8a63-4d16-886f-176bc5217d2a
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.629025    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.629025    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.629190    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.629695    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:37.630391    2528 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"356","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:37.630492    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:37.630492    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:37.630492    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:37.630492    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:37.648376    2528 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0716 18:47:37.649109    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:37.649109    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:37.649109    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:37 GMT
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Audit-Id: 187b5dbc-dd05-4b56-b446-13e940140dc1
	I0716 18:47:37.649211    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:37.649211    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"358","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.116364    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.116364    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116364    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116364    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.116629    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0716 18:47:38.116743    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.116743    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.116743    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: 0757dbcb-6945-4e67-a093-20e41b407fc5
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Length: 291
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15f9c4ea-6e41-404c-82f8-820f440891f1","resourceVersion":"368","creationTimestamp":"2024-07-17T01:47:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0716 18:47:38.122150    2528 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-343600" context rescaled to 1 replicas
	I0716 18:47:38.122150    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:38.122150    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.122150    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.122150    2528 round_trippers.go:580]     Audit-Id: bbb6a5ef-764e-4077-8d9f-070ebdeb90f1
	I0716 18:47:38.123117    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.611399    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:38.611654    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:38.611654    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:38.611654    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:38.615555    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:38.615555    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Audit-Id: 0a21f6d3-6c65-4ac6-bcea-dc7024816704
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:38.615555    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:38.615555    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:38.615716    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:38 GMT
	I0716 18:47:38.616126    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:38.992628    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:38.993936    2528 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 18:47:38.994583    2528 kapi.go:59] client config for multinode-343600: &rest.Config{Host:"https://172.27.170.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-343600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19a55a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0716 18:47:38.995449    2528 addons.go:234] Setting addon default-storageclass=true in "multinode-343600"
	I0716 18:47:38.995541    2528 host.go:66] Checking if "multinode-343600" exists ...
	I0716 18:47:38.995972    2528 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0716 18:47:38.996840    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.000255    2528 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:39.000255    2528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0716 18:47:39.000255    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:39.118577    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.118801    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.119084    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.119154    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.123787    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:39.124674    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.124674    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Audit-Id: 60fe7a35-c0ab-4776-8ac4-0fb9f742bba7
	I0716 18:47:39.124674    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.125109    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.623973    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:39.624291    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:39.624291    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:39.624291    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:39.635851    2528 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0716 18:47:39.636699    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:39.636699    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:39 GMT
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Audit-Id: d34b7081-baa2-4b69-a50d-acae0701bf07
	I0716 18:47:39.636784    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:39.636819    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:39.636819    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:39.637256    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:39.637973    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:40.116698    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.116698    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.117012    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.117012    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.124779    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:40.124779    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.124779    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.124779    2528 round_trippers.go:580]     Audit-Id: e7d37931-19c7-48bb-a56c-167e2f8eef91
	I0716 18:47:40.124779    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:40.611715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:40.611808    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:40.611808    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:40.611808    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:40.615270    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:40.615270    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:40 GMT
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Audit-Id: 424b964d-49be-44f4-9642-7dc9b3041492
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:40.615270    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:40.615270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:40.615270    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.119095    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.119095    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.119391    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.119391    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.123315    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:41.123436    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.123436    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Audit-Id: 37b8523c-c31b-4c9a-9063-e3a7dcacc50c
	I0716 18:47:41.123436    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.124012    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.351167    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:41.472369    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:41.472726    2528 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:41.472726    2528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0716 18:47:41.472841    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600 ).state
	I0716 18:47:41.611248    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:41.611328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:41.611328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:41.611328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:41.622271    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:41.622271    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:41.622271    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:41 GMT
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Audit-Id: feb9d271-d3b3-4f9a-82b3-9f5b1a685276
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:41.622271    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:41.623281    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:41.624703    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.122015    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.122094    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.122094    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.122094    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.182290    2528 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0716 18:47:42.183214    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Audit-Id: d38046fe-098c-4114-aa63-b5ca2d87d465
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.183214    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.183214    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.183603    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:42.184083    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:42.615709    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:42.615709    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:42.616062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:42.616062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:42.619012    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:42.619012    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:42 GMT
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Audit-Id: cbb5c5f9-584a-4783-bb75-8e367b47e810
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:42.619759    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:42.619759    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:42.620426    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.110491    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.110491    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.110491    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.110491    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.114140    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:43.114140    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.114140    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Audit-Id: 00c98b31-30b6-473f-8475-869ad65d5165
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.114140    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.115192    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.618187    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:43.618397    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:43.618397    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:43.618397    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:43.622712    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:43.622712    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:43.622712    2528 round_trippers.go:580]     Audit-Id: d76ec6fc-10f4-46d8-be93-188cc9441f8b
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:43.622804    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:43.622804    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:43 GMT
	I0716 18:47:43.623169    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:43.748057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600 ).networkadapters[0]).ipaddresses[0]
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:44.050821    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:44.050821    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:44.110262    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.110262    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.110262    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.110262    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.114821    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:44.115023    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Audit-Id: 039e3a58-af25-4607-926d-e2294e1b24c7
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.115023    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.115023    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.115402    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.200180    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0716 18:47:44.617715    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:44.617791    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:44.617791    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:44.617791    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:44.621278    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:44.621278    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:44.621278    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:44.621278    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:44 GMT
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Audit-Id: bc71c70f-fc4a-4ece-9026-bf6c9a4e7247
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:44.622090    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:44.622310    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:44.622754    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:44.699027    2528 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0716 18:47:44.699027    2528 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0716 18:47:44.699158    2528 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0716 18:47:44.699158    2528 command_runner.go:130] > pod/storage-provisioner created
	I0716 18:47:45.123961    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.123961    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.124239    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.124239    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.128561    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:45.128561    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.128561    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.129270    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.129270    2528 round_trippers.go:580]     Audit-Id: 9710fb59-615c-48da-96f6-ab77d8716e6f
	I0716 18:47:45.129353    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.129903    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:45.619852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:45.619948    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:45.619948    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:45.620114    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:45.627244    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:45.627244    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:45.627244    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:45 GMT
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Audit-Id: 68cf0e3b-8724-4d9e-b31f-bd263330372e
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:45.627244    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:45.628707    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.132055    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.132055    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.132055    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.132055    2528 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0716 18:47:46.132055    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Audit-Id: 0af1b4ef-fab5-453f-916b-213f7084f274
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.132055    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.132055    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.132055    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stdout =====>] : 172.27.170.61
	
	I0716 18:47:46.224512    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:47:46.224760    2528 sshutil.go:53] new ssh client: &{IP:172.27.170.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600\id_rsa Username:docker}
	I0716 18:47:46.363994    2528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0716 18:47:46.513586    2528 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0716 18:47:46.514083    2528 round_trippers.go:463] GET https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses
	I0716 18:47:46.514083    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.514192    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.514192    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.518318    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:46.518368    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.518405    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Content-Length: 1273
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.518405    2528 round_trippers.go:580]     Audit-Id: 4184bfcc-b4cd-487e-b780-705d387f8465
	I0716 18:47:46.518405    2528 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0716 18:47:46.519105    2528 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.519250    2528 round_trippers.go:463] PUT https://172.27.170.61:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0716 18:47:46.519250    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.519250    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.519304    2528 round_trippers.go:473]     Content-Type: application/json
	I0716 18:47:46.519304    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.533676    2528 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0716 18:47:46.533676    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Length: 1220
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Audit-Id: 0b9f61f1-3924-499d-ab03-4dfb654750ce
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.533676    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.533676    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.534008    2528 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c144fa08-f15e-48e1-aad3-91e66269ec8b","resourceVersion":"396","creationTimestamp":"2024-07-17T01:47:46Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T01:47:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0716 18:47:46.537654    2528 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0716 18:47:46.541504    2528 addons.go:510] duration metric: took 9.880953s for enable addons: enabled=[storage-provisioner default-storageclass]
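The sequence above shows minikube applying `storageclass.yaml`, then reconciling the `standard` StorageClass with a GET followed by a PUT; the class is marked as the cluster default via the `storageclass.kubernetes.io/is-default-class: "true"` annotation visible in the response bodies. As a minimal sketch (the helper name is hypothetical, and the JSON is a trimmed-down version of the `StorageClassList` body from the log), this is how the default class can be picked out of such a list response:

```python
import json


def find_default_storageclass(storageclass_list):
    """Return the name of the StorageClass annotated as the default, or None.

    Checks the same annotation seen in the log's response bodies:
    storageclass.kubernetes.io/is-default-class: "true"
    """
    for item in storageclass_list.get("items", []):
        annotations = item.get("metadata", {}).get("annotations", {})
        if annotations.get("storageclass.kubernetes.io/is-default-class") == "true":
            return item["metadata"]["name"]
    return None


# Trimmed-down StorageClassList, modeled on the 200 OK body above.
body = json.loads("""{
  "kind": "StorageClassList",
  "apiVersion": "storage.k8s.io/v1",
  "items": [{
    "metadata": {
      "name": "standard",
      "annotations": {"storageclass.kubernetes.io/is-default-class": "true"}
    },
    "provisioner": "k8s.io/minikube-hostpath"
  }]
}""")

print(find_default_storageclass(body))  # -> standard
```

The GET-then-PUT pair in the log (note the matching `resourceVersion: "396"` in both bodies) is the usual optimistic-concurrency update: read the current object, then write it back so the server can reject the PUT if someone else changed it in between.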
	I0716 18:47:46.612750    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:46.612750    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:46.612750    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:46.612750    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:46.616643    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:46.616643    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Audit-Id: 2ea1e885-5ef5-465a-8eb6-caae80af0fbf
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:46.616643    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:46.616643    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:46.616849    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:46.616849    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:46 GMT
	I0716 18:47:46.617172    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.111509    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.111812    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.111812    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.111812    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.115189    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.115189    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.115189    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Audit-Id: 3e6057a4-6886-4e21-bdcb-c2dc7f616878
	I0716 18:47:47.115189    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.115514    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.115514    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.115955    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:47.116655    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:47.611771    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:47.611771    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:47.611771    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:47.611771    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:47.615409    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:47.615409    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:47 GMT
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Audit-Id: d951bf54-c488-44ba-b705-400a360d3009
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:47.615409    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:47.616162    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:47.616493    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.110862    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.111155    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.111155    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.111155    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.114746    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:48.114746    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Audit-Id: cf08d771-64b5-4a1c-9159-dd1af693d856
	I0716 18:47:48.114746    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.115672    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.115672    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.116023    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:48.614223    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:48.614328    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:48.614328    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:48.614328    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:48.616901    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:48.616901    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:48.616901    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:48 GMT
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Audit-Id: c9d5ae4c-3bb4-4f28-a759-2ae0b507e5c7
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:48.617838    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:48.617838    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:48.618698    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.110452    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.110452    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.110452    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.110452    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.114108    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:49.114170    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.114170    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.114170    2528 round_trippers.go:580]     Audit-Id: 460c5aad-82ae-4394-b6e7-c874b7c24b30
	I0716 18:47:49.114170    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.612745    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:49.613152    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:49.613152    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:49.613152    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:49.618720    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:49.618720    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:49.618720    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:49 GMT
	I0716 18:47:49.618720    2528 round_trippers.go:580]     Audit-Id: e8e98659-8931-443a-88d1-e197da3ba6f8
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:49.619592    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:49.619776    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:49.619974    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:50.121996    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.122086    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.122086    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.122086    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.125664    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.125664    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.125664    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Audit-Id: 87c94379-f7da-4cd8-9b5a-dbbe4f2efeab
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.126605    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.126605    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.126944    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:50.620146    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:50.620146    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:50.620146    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:50.620146    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:50.623799    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:50.623799    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:50.623799    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:50.623799    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:50 GMT
	I0716 18:47:50.624495    2528 round_trippers.go:580]     Audit-Id: d02402c0-2bd8-4f77-a05a-4fef59c96251
	I0716 18:47:50.624730    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.116780    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.116780    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.116902    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.116902    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.119946    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:51.119946    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.121062    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.121062    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Audit-Id: 5f38b95e-7bda-4eaf-9d1b-218fc37e4c50
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.121101    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.121101    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.121801    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.616888    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:51.616888    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:51.617197    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:51.617197    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:51.621783    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:51.622508    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:51 GMT
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Audit-Id: aa4742aa-9a16-4750-a1c4-74d14a791c2b
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:51.622508    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:51.622508    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:51.622896    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:51.623411    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:52.114062    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.114062    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.114062    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.114062    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.117648    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:52.117648    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Audit-Id: da9aa85f-7bc5-4b3f-807e-2a5e331efedd
	I0716 18:47:52.117648    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.118762    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.118762    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.118802    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.119005    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:52.615682    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:52.615742    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:52.615742    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:52.615742    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:52.620334    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:52.620334    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Audit-Id: fd2b756a-0ac6-4cc2-8708-a28deffe3b6e
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:52.620334    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:52.620334    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:52 GMT
	I0716 18:47:52.620870    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"327","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0716 18:47:53.115901    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.116089    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.116089    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.116089    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.119600    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:53.119600    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.119600    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.119600    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Audit-Id: 92cf5cb7-9761-43f8-ae51-83d098119b95
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.119673    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.119673    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.120481    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:53.614421    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:53.614421    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:53.614635    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:53.614635    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:53.619116    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:53.619116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:53 GMT
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Audit-Id: 95a4052a-29bb-405a-b73c-609276132f93
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:53.619193    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:53.619193    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:53.619534    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.113342    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.113342    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.113342    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.113342    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.117055    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.117273    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.117273    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Audit-Id: 1457e297-343d-4281-b109-51d7c1b7a548
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.117273    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.117446    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:54.117988    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:54.614852    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:54.614852    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:54.614852    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:54.614852    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:54.618678    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:54.618678    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Audit-Id: 31070e7f-9d08-4f23-bb7e-1a2c68818ffd
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:54.619286    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:54.619286    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:54 GMT
	I0716 18:47:54.619679    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.118360    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.118360    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.118360    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.118506    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.126193    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:55.126745    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Audit-Id: 71e44c3a-2fc0-4417-94f7-477981e3a04c
	I0716 18:47:55.126745    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.126827    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.126827    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.126869    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:55.615806    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:55.615806    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:55.615806    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:55.615806    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:55.620455    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:55.620519    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:55.620519    2528 round_trippers.go:580]     Audit-Id: e8b9f563-a537-4e74-a3ea-77f1f0b6fb6f
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:55.620660    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:55.620660    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:55 GMT
	I0716 18:47:55.620660    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.114910    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.114910    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.114910    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.114910    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.119363    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:56.119504    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.119504    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.119504    2528 round_trippers.go:580]     Audit-Id: d0ac9859-c922-4a24-9d62-81df46a77cb3
	I0716 18:47:56.119788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:56.120353    2528 node_ready.go:53] node "multinode-343600" has status "Ready":"False"
	I0716 18:47:56.613697    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:56.614033    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:56.614033    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:56.614033    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:56.617102    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:56.617102    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:56.617102    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:56 GMT
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Audit-Id: 992de97d-254b-429b-8f5c-09959dc88e6c
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:56.617839    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:56.618241    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"400","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0716 18:47:57.116651    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.116916    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.116916    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.116916    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.124127    2528 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0716 18:47:57.124184    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Audit-Id: bcb3aaf4-64cb-495f-82ab-70f2e04b36ae
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.124184    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.124184    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.124264    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.124417    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.124652    2528 node_ready.go:49] node "multinode-343600" has status "Ready":"True"
	I0716 18:47:57.124652    2528 node_ready.go:38] duration metric: took 19.5154549s for node "multinode-343600" to be "Ready" ...
	I0716 18:47:57.124652    2528 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:57.125186    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:57.125186    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.125241    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.125241    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.133433    2528 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0716 18:47:57.133433    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Audit-Id: e60e7267-6477-4645-881f-115ecc10f4bb
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.133433    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.133433    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.135418    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0716 18:47:57.141423    2528 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:57.141423    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.142416    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.142416    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.142416    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.145432    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:57.146296    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.146296    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.146296    2528 round_trippers.go:580]     Audit-Id: 4b7e84f7-5a58-4a98-8b25-ea2f541617ef
	I0716 18:47:57.146415    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.146583    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.146646    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.146646    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.146646    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.146646    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.153663    2528 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0716 18:47:57.153663    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.153663    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Audit-Id: 19843a14-a85e-498f-834c-5d4a1c1aa37a
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.153663    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.157575    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:57.655028    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:57.655028    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.655129    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.655129    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.665608    2528 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0716 18:47:57.665608    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.665608    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.665686    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Audit-Id: ef794d27-d7ad-4c1b-9f26-80a9612b7353
	I0716 18:47:57.665686    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.665971    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:57.666975    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:57.666975    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:57.666975    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:57.666975    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:57.672436    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:47:57.673468    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:57.673468    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:57 GMT
	I0716 18:47:57.673468    2528 round_trippers.go:580]     Audit-Id: fa4f9791-ab9b-44a2-a02d-225faa48ddd9
	I0716 18:47:57.673624    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:57.674353    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.148196    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.148483    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.148483    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.148483    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.152116    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.152116    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Audit-Id: 905cdc05-1adc-4bda-bb34-d2b93e716f7b
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.152575    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.152575    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.152851    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.153648    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.153715    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.153715    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.153715    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.157121    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.157121    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.157121    2528 round_trippers.go:580]     Audit-Id: 943dfa47-cb98-43d7-97f2-36e092278748
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.157389    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.157389    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.157788    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:58.650707    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:58.650707    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.650796    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.650796    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.655030    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:58.655383    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Audit-Id: 5c8df901-f0d1-4a1b-9232-bf839cdc4b7c
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.655383    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.655383    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.655616    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"409","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0716 18:47:58.656602    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:58.656602    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:58.656602    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:58.656706    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:58.660051    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:58.660225    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Audit-Id: 68f4d8fa-0bab-4c5d-bc69-fe03223feeb5
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:58.660225    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:58.660225    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:58 GMT
	I0716 18:47:58.660611    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.154800    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mmfw4
	I0716 18:47:59.154903    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.154903    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.154903    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.158974    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.158974    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.158974    2528 round_trippers.go:580]     Audit-Id: e512771c-0f4c-4658-803b-fe30523b67c9
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.159298    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.159298    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.159612    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0716 18:47:59.160576    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.160576    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.160649    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.160649    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.162374    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.162374    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Audit-Id: 7fe446ed-4158-4424-94b6-fddc5bd3e58b
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.162374    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.162374    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.163307    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.163680    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.164139    2528 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.164139    2528 pod_ready.go:81] duration metric: took 2.0227095s for pod "coredns-7db6d8ff4d-mmfw4" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164235    2528 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.164361    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-343600
	I0716 18:47:59.164361    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.164420    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.164420    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.166742    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.166742    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Audit-Id: 151c57d8-ae0f-40c4-9de8-50c04473604a
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.166742    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.166742    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.167475    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-343600","namespace":"kube-system","uid":"bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112","resourceVersion":"379","creationTimestamp":"2024-07-17T01:47:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.170.61:2379","kubernetes.io/config.hash":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.mirror":"483dfcb9e2f3704132c965ae08ccf97e","kubernetes.io/config.seen":"2024-07-17T01:47:14.003970410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0716 18:47:59.168221    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.168284    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.168284    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.168284    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.171619    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.171619    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Audit-Id: c5108ac0-8f26-4ca2-b650-8aa4794f7c0e
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.171619    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.171619    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.172297    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.172297    2528 pod_ready.go:92] pod "etcd-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.172297    2528 pod_ready.go:81] duration metric: took 8.0621ms for pod "etcd-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.172297    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-343600
	I0716 18:47:59.172297    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.172297    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.172297    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.175420    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.175420    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Audit-Id: 1d015233-2c1f-4768-8da3-ebe57658664f
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.175420    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.175420    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.175711    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.175906    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-343600","namespace":"kube-system","uid":"9148a015-dfa6-4650-8b8c-74278c687979","resourceVersion":"380","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.170.61:8443","kubernetes.io/config.hash":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.mirror":"e472c24a8bcc7cb1ba26f58d51cc4826","kubernetes.io/config.seen":"2024-07-17T01:47:22.020569070Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0716 18:47:59.176153    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.176153    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.176153    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.176153    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.179736    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.179736    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Audit-Id: 8950480d-384c-49df-9153-382ab4a3727b
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.179736    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.179736    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.180143    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.180538    2528 pod_ready.go:92] pod "kube-apiserver-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.180741    2528 pod_ready.go:81] duration metric: took 8.4434ms for pod "kube-apiserver-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180766    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.180853    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-343600
	I0716 18:47:59.180853    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.180853    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.180853    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.184151    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.184151    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Audit-Id: bc757a5d-bc0a-47f5-b86c-cc2d6d91d310
	I0716 18:47:59.184151    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.184906    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.184906    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.185330    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-343600","namespace":"kube-system","uid":"edf27e5f-149c-476f-bec4-5af7dac112e1","resourceVersion":"382","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.mirror":"4bb42063784bc71056beed65195cd83f","kubernetes.io/config.seen":"2024-07-17T01:47:22.020570470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0716 18:47:59.185609    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.185609    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.185609    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.185609    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.188621    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.188621    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.188621    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.188621    2528 round_trippers.go:580]     Audit-Id: 7dd4db61-c2e6-4f84-a96b-fe12de2716a8
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.188795    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.189267    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.189824    2528 pod_ready.go:92] pod "kube-controller-manager-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.189824    2528 pod_ready.go:81] duration metric: took 9.0585ms for pod "kube-controller-manager-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.189824    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rzpvp
	I0716 18:47:59.189824    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.189824    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.189824    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.191969    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.191969    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Audit-Id: ab541ba2-b7c2-4cb8-b746-caa81ef8028e
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.191969    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.192988    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.193010    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.193265    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rzpvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea6197b-5157-401b-a1bd-e99e8b509f27","resourceVersion":"373","creationTimestamp":"2024-07-17T01:47:36Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06ff1de2-f49f-4d0f-95fb-467783ba79ef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06ff1de2-f49f-4d0f-95fb-467783ba79ef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0716 18:47:59.194213    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.194213    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.194213    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.194213    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.196812    2528 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0716 18:47:59.197019    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Audit-Id: b6fe5052-b479-4e38-8e76-7c4f6815f360
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.197019    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.197019    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.197454    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.197736    2528 pod_ready.go:92] pod "kube-proxy-rzpvp" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.197736    2528 pod_ready.go:81] duration metric: took 7.9113ms for pod "kube-proxy-rzpvp" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.197736    2528 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.358948    2528 request.go:629] Waited for 161.0019ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-343600
	I0716 18:47:59.359051    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.359051    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.359051    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.363239    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.363305    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Audit-Id: ea717242-9ed4-4c8a-b79c-81db438b439e
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.363305    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.363305    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.363305    2528 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-343600","namespace":"kube-system","uid":"4eecc30a-e942-4896-8847-e78138a7f1df","resourceVersion":"381","creationTimestamp":"2024-07-17T01:47:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.mirror":"8418f8daebd23f1f20698e07c205e4a9","kubernetes.io/config.seen":"2024-07-17T01:47:22.020571570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0716 18:47:59.560410    2528 request.go:629] Waited for 196.2858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes/multinode-343600
	I0716 18:47:59.560673    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.560673    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.560768    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.564358    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.564358    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.564921    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Audit-Id: 7c073308-55ec-4d4c-bc5a-af6974edac5c
	I0716 18:47:59.564921    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.565125    2528 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T01:47:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0716 18:47:59.565760    2528 pod_ready.go:92] pod "kube-scheduler-multinode-343600" in "kube-system" namespace has status "Ready":"True"
	I0716 18:47:59.565760    2528 pod_ready.go:81] duration metric: took 368.0229ms for pod "kube-scheduler-multinode-343600" in "kube-system" namespace to be "Ready" ...
	I0716 18:47:59.565760    2528 pod_ready.go:38] duration metric: took 2.4410992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0716 18:47:59.565760    2528 api_server.go:52] waiting for apiserver process to appear ...
	I0716 18:47:59.579270    2528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0716 18:47:59.611168    2528 command_runner.go:130] > 2219
	I0716 18:47:59.611786    2528 api_server.go:72] duration metric: took 22.9509403s to wait for apiserver process to appear ...
	I0716 18:47:59.611874    2528 api_server.go:88] waiting for apiserver healthz status ...
	I0716 18:47:59.611937    2528 api_server.go:253] Checking apiserver healthz at https://172.27.170.61:8443/healthz ...
	I0716 18:47:59.619353    2528 api_server.go:279] https://172.27.170.61:8443/healthz returned 200:
	ok
	I0716 18:47:59.619353    2528 round_trippers.go:463] GET https://172.27.170.61:8443/version
	I0716 18:47:59.619353    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.620339    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.620339    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.621343    2528 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0716 18:47:59.621343    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Length: 263
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Audit-Id: 8fb94b21-bdf3-435a-8f28-10895141455f
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.621343    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.621343    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.621343    2528 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0716 18:47:59.621343    2528 api_server.go:141] control plane version: v1.30.2
	I0716 18:47:59.621343    2528 api_server.go:131] duration metric: took 9.4685ms to wait for apiserver health ...
	I0716 18:47:59.621343    2528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0716 18:47:59.760491    2528 request.go:629] Waited for 139.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:47:59.760596    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.760673    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.760701    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.765283    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:47:59.765283    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Audit-Id: 00225eee-2715-4c1f-9513-d32741dab68d
	I0716 18:47:59.765283    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.765524    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.765524    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.767690    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:47:59.770779    2528 system_pods.go:59] 8 kube-system pods found
	I0716 18:47:59.770850    2528 system_pods.go:61] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:47:59.770850    2528 system_pods.go:61] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:47:59.770940    2528 system_pods.go:61] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:47:59.770940    2528 system_pods.go:74] duration metric: took 149.5965ms to wait for pod list to return data ...
	I0716 18:47:59.770940    2528 default_sa.go:34] waiting for default service account to be created ...
	I0716 18:47:59.963652    2528 request.go:629] Waited for 192.4214ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/default/serviceaccounts
	I0716 18:47:59.964001    2528 round_trippers.go:469] Request Headers:
	I0716 18:47:59.964001    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:47:59.964001    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:47:59.967792    2528 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0716 18:47:59.967792    2528 round_trippers.go:577] Response Headers:
	I0716 18:47:59.967792    2528 round_trippers.go:580]     Content-Length: 261
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:47:59 GMT
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Audit-Id: ca0db25e-b42c-4e53-b910-e902963ea811
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:47:59.968534    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:47:59.968534    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:47:59.968534    2528 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6a0024e-29a5-4b63-b334-88de09233121","resourceVersion":"312","creationTimestamp":"2024-07-17T01:47:36Z"}}]}
	I0716 18:47:59.969015    2528 default_sa.go:45] found service account: "default"
	I0716 18:47:59.969015    2528 default_sa.go:55] duration metric: took 198.0751ms for default service account to be created ...
	I0716 18:47:59.969015    2528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0716 18:48:00.166892    2528 request.go:629] Waited for 197.6224ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/namespaces/kube-system/pods
	I0716 18:48:00.166892    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.166892    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.166892    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.172737    2528 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0716 18:48:00.172737    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.172737    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.172737    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Audit-Id: 45d3de16-90b2-49ce-99a8-79bb627f6765
	I0716 18:48:00.173112    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.175420    2528 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mmfw4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a250328b-d9b2-4190-bf67-f997fd8bf662","resourceVersion":"422","creationTimestamp":"2024-07-17T01:47:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"470efffb-db08-4a0b-bfd5-9b3a3d248fc1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T01:47:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"470efffb-db08-4a0b-bfd5-9b3a3d248fc1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0716 18:48:00.178579    2528 system_pods.go:86] 8 kube-system pods found
	I0716 18:48:00.178644    2528 system_pods.go:89] "coredns-7db6d8ff4d-mmfw4" [a250328b-d9b2-4190-bf67-f997fd8bf662] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "etcd-multinode-343600" [bc5d8cf3-b4fb-4f85-b110-d5cabd7d5112] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kindnet-wlznl" [051ed52f-46bf-42ec-a556-312724d37f57] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-apiserver-multinode-343600" [9148a015-dfa6-4650-8b8c-74278c687979] Running
	I0716 18:48:00.178644    2528 system_pods.go:89] "kube-controller-manager-multinode-343600" [edf27e5f-149c-476f-bec4-5af7dac112e1] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-proxy-rzpvp" [4ea6197b-5157-401b-a1bd-e99e8b509f27] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "kube-scheduler-multinode-343600" [4eecc30a-e942-4896-8847-e78138a7f1df] Running
	I0716 18:48:00.178724    2528 system_pods.go:89] "storage-provisioner" [428f3e80-d110-4808-9bb9-324bd0614d74] Running
	I0716 18:48:00.178724    2528 system_pods.go:126] duration metric: took 209.708ms to wait for k8s-apps to be running ...
	I0716 18:48:00.178724    2528 system_svc.go:44] waiting for kubelet service to be running ....
	I0716 18:48:00.191178    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0716 18:48:00.219131    2528 system_svc.go:56] duration metric: took 40.4071ms WaitForService to wait for kubelet
	I0716 18:48:00.220171    2528 kubeadm.go:582] duration metric: took 23.5582836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0716 18:48:00.220171    2528 node_conditions.go:102] verifying NodePressure condition ...
	I0716 18:48:00.369476    2528 request.go:629] Waited for 149.2417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:463] GET https://172.27.170.61:8443/api/v1/nodes
	I0716 18:48:00.369476    2528 round_trippers.go:469] Request Headers:
	I0716 18:48:00.369476    2528 round_trippers.go:473]     Accept: application/json, */*
	I0716 18:48:00.369476    2528 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0716 18:48:00.373730    2528 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0716 18:48:00.373730    2528 round_trippers.go:577] Response Headers:
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Date: Wed, 17 Jul 2024 01:48:00 GMT
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Audit-Id: 60d87b7b-7d4d-4ca2-b2e8-87af3307f9ed
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Cache-Control: no-cache, private
	I0716 18:48:00.373730    2528 round_trippers.go:580]     Content-Type: application/json
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d69248-dc15-43c6-bfa2-66453fd0b258
	I0716 18:48:00.373730    2528 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: de4c0888-9007-4b55-82e2-b6733ca6f561
	I0716 18:48:00.374755    2528 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-343600","uid":"b7700291-9803-4aea-af61-6e7e779916e2","resourceVersion":"403","creationTimestamp":"2024-07-17T01:47:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-343600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6910ff1293b7338a320c1c51aaf2fcee1cf8a91","minikube.k8s.io/name":"multinode-343600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_16T18_47_23_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0716 18:48:00.374755    2528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0716 18:48:00.374755    2528 node_conditions.go:123] node cpu capacity is 2
	I0716 18:48:00.374755    2528 node_conditions.go:105] duration metric: took 154.5833ms to run NodePressure ...
	I0716 18:48:00.374755    2528 start.go:241] waiting for startup goroutines ...
	I0716 18:48:00.374755    2528 start.go:246] waiting for cluster config update ...
	I0716 18:48:00.374755    2528 start.go:255] writing updated cluster config ...
	I0716 18:48:00.380904    2528 out.go:177] 
	I0716 18:48:00.384131    2528 config.go:182] Loaded profile config "ha-339000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.391131    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:48:00.392164    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.397528    2528 out.go:177] * Starting "multinode-343600-m02" worker node in "multinode-343600" cluster
	I0716 18:48:00.400921    2528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 18:48:00.401944    2528 cache.go:56] Caching tarball of preloaded images
	I0716 18:48:00.402360    2528 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0716 18:48:00.402585    2528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0716 18:48:00.402693    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:48:00.406814    2528 start.go:360] acquireMachinesLock for multinode-343600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0716 18:48:00.407161    2528 start.go:364] duration metric: took 346.8µs to acquireMachinesLock for "multinode-343600-m02"
	I0716 18:48:00.407399    2528 start.go:93] Provisioning new machine with config: &{Name:multinode-343600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-343600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.170.61 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0716 18:48:00.407492    2528 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0716 18:48:00.411365    2528 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0716 18:48:00.411365    2528 start.go:159] libmachine.API.Create for "multinode-343600" (driver="hyperv")
	I0716 18:48:00.411365    2528 client.go:168] LocalClient.Create starting
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0716 18:48:00.411365    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412339    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.412543    2528 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Decoding PEM data...
	I0716 18:48:00.412778    2528 main.go:141] libmachine: Parsing certificate...
	I0716 18:48:00.413031    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0716 18:48:02.307377    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:02.307838    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stdout =====>] : False
	
	I0716 18:48:04.037006    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:04.037392    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:05.520462    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:05.521074    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:09.133613    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:09.134322    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:09.136555    2528 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0716 18:48:09.600292    2528 main.go:141] libmachine: Creating SSH key...
	I0716 18:48:09.724774    2528 main.go:141] libmachine: Creating VM...
	I0716 18:48:09.725774    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0716 18:48:12.715862    2528 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0716 18:48:12.716084    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:12.716084    2528 main.go:141] libmachine: Using switch "Default Switch"
	I0716 18:48:12.716224    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stdout =====>] : True
	
	I0716 18:48:14.492687    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:14.493032    2528 main.go:141] libmachine: Creating VHD
	I0716 18:48:14.493032    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 35E17E97-8EA5-42A5-A1C0-A4D62C9F1A5D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	I0716 18:48:18.340352    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:18.340352    2528 main.go:141] libmachine: Writing magic tar header
	I0716 18:48:18.341149    2528 main.go:141] libmachine: Writing SSH key tar header
	I0716 18:48:18.354544    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0716 18:48:21.641786    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:21.642494    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:21.642575    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd' -SizeBytes 20000MB
	I0716 18:48:24.762649    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:24.763000    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:24.763094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0716 18:48:28.501080    2528 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-343600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:28.501350    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-343600-m02 -DynamicMemoryEnabled $false
	I0716 18:48:30.819389    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:30.820375    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:30.820495    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-343600-m02 -Count 2
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:33.099636    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:33.099856    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\boot2docker.iso'
	I0716 18:48:35.785504    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:35.786185    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:35.786265    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-343600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\disk.vhd'
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:38.524264    2528 main.go:141] libmachine: Starting VM...
	I0716 18:48:38.525362    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-343600-m02
	I0716 18:48:42.196095    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:42.196207    2528 main.go:141] libmachine: Waiting for host to start...
	I0716 18:48:42.196207    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:44.555136    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:44.555572    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:47.169875    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:48.184959    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:50.433141    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:50.433867    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:50.434057    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:53.016694    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:54.017567    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:48:56.261070    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:56.261562    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:48:58.784532    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:48:59.786634    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:02.025012    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:02.025816    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stdout =====>] : 
	I0716 18:49:04.581164    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:05.587121    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:07.855481    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:07.856398    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:10.566086    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:10.566785    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:12.767457    2528 machine.go:94] provisionDockerMachine start ...
	I0716 18:49:12.767457    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:14.922371    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:14.922651    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:17.469827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:17.480921    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:17.492335    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:17.492335    2528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0716 18:49:17.626877    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0716 18:49:17.626877    2528 buildroot.go:166] provisioning hostname "multinode-343600-m02"
	I0716 18:49:17.626877    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:19.854069    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:19.854153    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:22.473547    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:22.473853    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:22.480226    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:22.480995    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:22.480995    2528 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-343600-m02 && echo "multinode-343600-m02" | sudo tee /etc/hostname
	I0716 18:49:22.636598    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-343600-m02
	
	I0716 18:49:22.636666    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:24.785576    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:24.786271    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:27.348703    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:27.356104    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:27.356639    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:27.356801    2528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-343600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-343600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-343600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0716 18:49:27.509602    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0716 18:49:27.509602    2528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0716 18:49:27.509602    2528 buildroot.go:174] setting up certificates
	I0716 18:49:27.509602    2528 provision.go:84] configureAuth start
	I0716 18:49:27.509602    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:29.640736    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:29.641238    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:32.201912    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:32.202707    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:34.368496    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:36.916034    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:36.916034    2528 provision.go:143] copyHostCerts
	I0716 18:49:36.916274    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0716 18:49:36.916498    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0716 18:49:36.916614    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0716 18:49:36.916998    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0716 18:49:36.918347    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0716 18:49:36.918554    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0716 18:49:36.918660    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0716 18:49:36.918916    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0716 18:49:36.920073    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0716 18:49:36.920408    2528 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0716 18:49:36.920408    2528 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0716 18:49:36.920780    2528 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0716 18:49:36.922143    2528 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-343600-m02 san=[127.0.0.1 172.27.171.221 localhost minikube multinode-343600-m02]
	I0716 18:49:37.019606    2528 provision.go:177] copyRemoteCerts
	I0716 18:49:37.033920    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0716 18:49:37.033920    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:39.197624    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:41.830713    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:41.831929    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:49:41.934007    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9000693s)
	I0716 18:49:41.934007    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0716 18:49:41.934007    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0716 18:49:41.984009    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0716 18:49:41.984576    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0716 18:49:42.032036    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0716 18:49:42.032036    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0716 18:49:42.082983    2528 provision.go:87] duration metric: took 14.5733288s to configureAuth
	I0716 18:49:42.083096    2528 buildroot.go:189] setting minikube options for container-runtime
	I0716 18:49:42.083844    2528 config.go:182] Loaded profile config "multinode-343600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 18:49:42.083938    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:44.259658    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:46.810162    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:46.816270    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:46.816424    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:46.816424    2528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0716 18:49:46.959094    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0716 18:49:46.959094    2528 buildroot.go:70] root file system type: tmpfs
	I0716 18:49:46.959094    2528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0716 18:49:46.959094    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:49.139827    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:51.724905    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:51.730614    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:51.731349    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:51.731349    2528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.170.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0716 18:49:51.900591    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.170.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0716 18:49:51.900659    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:49:54.046075    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:49:54.046323    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:54.046437    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:49:56.575837    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:49:56.575893    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:49:56.582273    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:49:56.582996    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:49:56.582996    2528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0716 18:49:58.866917    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0716 18:49:58.866917    2528 machine.go:97] duration metric: took 46.0992943s to provisionDockerMachine
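The unit-update sequence above writes the candidate config to `docker.service.new`, `diff`s it against the live unit, and only swaps it in and restarts the daemon when they differ (here the live unit did not exist yet, hence the "can't stat" branch). A minimal local sketch of that pattern; the `/tmp` path is an illustrative stand-in for `/lib/systemd/system/docker.service`, and the real restart commands are left as comments:

```shell
# Sketch of the idempotent systemd-unit update pattern from the log:
# write the candidate config to a .new file, then swap it in only when
# it differs from (or is missing from) the current one.
set -eu
unit=/tmp/demo-docker.service            # stand-in for the real unit path
printf '[Unit]\nDescription=Demo\n' > "${unit}.new"
if ! diff -u "$unit" "${unit}.new" >/dev/null 2>&1; then
  mv "${unit}.new" "$unit"
  # on the real host this branch continues with:
  # systemctl daemon-reload && systemctl -f enable docker && systemctl -f restart docker
fi
cat "$unit"
```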
	I0716 18:49:58.866917    2528 client.go:171] duration metric: took 1m58.4551259s to LocalClient.Create
	I0716 18:49:58.866917    2528 start.go:167] duration metric: took 1m58.4551259s to libmachine.API.Create "multinode-343600"
	I0716 18:49:58.866917    2528 start.go:293] postStartSetup for "multinode-343600-m02" (driver="hyperv")
	I0716 18:49:58.867643    2528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0716 18:49:58.882162    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0716 18:49:58.882162    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:01.054527    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:01.055223    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:03.638810    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:03.639114    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:03.750228    2528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8680484s)
	I0716 18:50:03.763257    2528 ssh_runner.go:195] Run: cat /etc/os-release
	I0716 18:50:03.771788    2528 command_runner.go:130] > NAME=Buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0716 18:50:03.771788    2528 command_runner.go:130] > ID=buildroot
	I0716 18:50:03.771788    2528 command_runner.go:130] > VERSION_ID=2023.02.9
	I0716 18:50:03.771881    2528 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0716 18:50:03.771881    2528 info.go:137] Remote host: Buildroot 2023.02.9
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0716 18:50:03.771881    2528 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0716 18:50:03.773360    2528 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> 47402.pem in /etc/ssl/certs
	I0716 18:50:03.773360    2528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem -> /etc/ssl/certs/47402.pem
	I0716 18:50:03.786672    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0716 18:50:03.806799    2528 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\47402.pem --> /etc/ssl/certs/47402.pem (1708 bytes)
	I0716 18:50:03.858135    2528 start.go:296] duration metric: took 4.9911999s for postStartSetup
	I0716 18:50:03.861694    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:06.003555    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:06.003780    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:08.584946    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:08.585615    2528 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-343600\config.json ...
	I0716 18:50:08.588648    2528 start.go:128] duration metric: took 2m8.1806947s to createHost
	I0716 18:50:08.588758    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:10.803052    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:10.804146    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:13.403213    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:13.403275    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:13.409344    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:13.409519    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:13.409519    2528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0716 18:50:13.548785    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181013.550580230
	
	I0716 18:50:13.548883    2528 fix.go:216] guest clock: 1721181013.550580230
	I0716 18:50:13.548883    2528 fix.go:229] Guest: 2024-07-16 18:50:13.55058023 -0700 PDT Remote: 2024-07-16 18:50:08.5887187 -0700 PDT m=+352.495185101 (delta=4.96186153s)
	I0716 18:50:13.549013    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:15.666580    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:15.667105    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:18.223947    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:18.230519    2528 main.go:141] libmachine: Using SSH client type: native
	I0716 18:50:18.231289    2528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4fa9e0] 0x4fd5c0 <nil>  [] 0s} 172.27.171.221 22 <nil> <nil>}
	I0716 18:50:18.231289    2528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721181013
	I0716 18:50:18.382796    2528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 17 01:50:13 UTC 2024
	
	I0716 18:50:18.382905    2528 fix.go:236] clock set: Wed Jul 17 01:50:13 UTC 2024
	 (err=<nil>)
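The clock fix above reads the guest's epoch time over SSH (`date +%s.%N`), computes the delta against the host clock, and resyncs with `sudo date -s @<epoch>` when the drift is large. A sketch of that comparison, with both timestamps taken locally for illustration (in the log the 4.96s delta came from the ~5s spent polling Hyper-V between the two readings):

```shell
# Sketch of the guest-clock drift check from the log. In the real flow
# "guest" comes from `date +%s.%N` over SSH; here both sides are local.
guest=$(date +%s)
host=$(date +%s)
delta=$((guest - host))
if [ "${delta#-}" -gt 2 ]; then           # strip sign, compare magnitude
  echo "would run: sudo date -s @$host"   # resync the guest clock
else
  echo "clock within tolerance (delta=${delta}s)"
fi
```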
	I0716 18:50:18.382970    2528 start.go:83] releasing machines lock for "multinode-343600-m02", held for 2m17.9751934s
	I0716 18:50:18.383229    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:20.594463    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:23.178317    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:23.180855    2528 out.go:177] * Found network options:
	I0716 18:50:23.184410    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.187221    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.189465    2528 out.go:177]   - NO_PROXY=172.27.170.61
	W0716 18:50:23.192015    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	W0716 18:50:23.193586    2528 proxy.go:119] fail to check proxy env: Error ip not in block
	I0716 18:50:23.196267    2528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0716 18:50:23.196363    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:23.206583    2528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0716 18:50:23.206583    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-343600-m02 ).state
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.472547    2528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0716 18:50:25.473539    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:25.473748    2528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-343600-m02 ).networkadapters[0]).ipaddresses[0]
	I0716 18:50:28.172413    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.173331    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.173550    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.198874    2528 main.go:141] libmachine: [stdout =====>] : 172.27.171.221
	
	I0716 18:50:28.199782    2528 main.go:141] libmachine: [stderr =====>] : 
	I0716 18:50:28.200135    2528 sshutil.go:53] new ssh client: &{IP:172.27.171.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-343600-m02\id_rsa Username:docker}
	I0716 18:50:28.265809    2528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0716 18:50:28.266290    2528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0699162s)
	W0716 18:50:28.266290    2528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0716 18:50:28.301226    2528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0716 18:50:28.301964    2528 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0952192s)
	W0716 18:50:28.301964    2528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0716 18:50:28.314174    2528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0716 18:50:28.344876    2528 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0716 18:50:28.344876    2528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0716 18:50:28.344876    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:28.344876    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0716 18:50:28.381797    2528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0716 18:50:28.381936    2528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0716 18:50:28.387424    2528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0716 18:50:28.398601    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0716 18:50:28.433994    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0716 18:50:28.454670    2528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0716 18:50:28.467851    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0716 18:50:28.503424    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.534988    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0716 18:50:28.570699    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0716 18:50:28.602905    2528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0716 18:50:28.634739    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0716 18:50:28.665437    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0716 18:50:28.698121    2528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0716 18:50:28.729807    2528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0716 18:50:28.749975    2528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0716 18:50:28.761923    2528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0716 18:50:28.795043    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:28.999182    2528 ssh_runner.go:195] Run: sudo systemctl restart containerd
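The `sed` passes above rewrite `/etc/containerd/config.toml` in place so containerd uses the cgroupfs driver. The key edit (`SystemdCgroup = false`, preserving indentation via the capture group) can be reproduced with the same sed pattern against a throwaway file; the `/tmp` path is a stand-in, and `sed -i -r` assumes GNU sed as on the Buildroot guest:

```shell
# Sketch of the cgroup-driver rewrite from the log: the same sed pattern,
# applied to a throwaway config.toml instead of /etc/containerd/config.toml.
cfg=/tmp/demo-containerd-config.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```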
	I0716 18:50:29.030257    2528 start.go:495] detecting cgroup driver to use...
	I0716 18:50:29.043346    2528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0716 18:50:29.092972    2528 command_runner.go:130] > [Unit]
	I0716 18:50:29.093076    2528 command_runner.go:130] > Description=Docker Application Container Engine
	I0716 18:50:29.093076    2528 command_runner.go:130] > Documentation=https://docs.docker.com
	I0716 18:50:29.093076    2528 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0716 18:50:29.093076    2528 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitBurst=3
	I0716 18:50:29.093076    2528 command_runner.go:130] > StartLimitIntervalSec=60
	I0716 18:50:29.093076    2528 command_runner.go:130] > [Service]
	I0716 18:50:29.093164    2528 command_runner.go:130] > Type=notify
	I0716 18:50:29.093164    2528 command_runner.go:130] > Restart=on-failure
	I0716 18:50:29.093164    2528 command_runner.go:130] > Environment=NO_PROXY=172.27.170.61
	I0716 18:50:29.093164    2528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0716 18:50:29.093164    2528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0716 18:50:29.093164    2528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0716 18:50:29.093164    2528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0716 18:50:29.093164    2528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0716 18:50:29.093164    2528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0716 18:50:29.093164    2528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0716 18:50:29.093164    2528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0716 18:50:29.093164    2528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNOFILE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitNPROC=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > LimitCORE=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0716 18:50:29.093164    2528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0716 18:50:29.093164    2528 command_runner.go:130] > TasksMax=infinity
	I0716 18:50:29.093164    2528 command_runner.go:130] > TimeoutStartSec=0
	I0716 18:50:29.093164    2528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0716 18:50:29.093164    2528 command_runner.go:130] > Delegate=yes
	I0716 18:50:29.093164    2528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0716 18:50:29.093164    2528 command_runner.go:130] > KillMode=process
	I0716 18:50:29.093164    2528 command_runner.go:130] > [Install]
	I0716 18:50:29.093164    2528 command_runner.go:130] > WantedBy=multi-user.target
	I0716 18:50:29.107245    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.146878    2528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0716 18:50:29.195675    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0716 18:50:29.233550    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.273295    2528 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0716 18:50:29.339804    2528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0716 18:50:29.363714    2528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0716 18:50:29.396425    2528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
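The crictl step above points the CLI at the cri-dockerd socket by piping the `runtime-endpoint` line through `sudo tee` into `/etc/crictl.yaml` (the `tee` output is what `command_runner` echoes back). The same write against a throwaway path:

```shell
# Sketch of the crictl endpoint switch from the log; the /tmp path is a
# stand-in for /etc/crictl.yaml, which needs `sudo tee` on the VM.
f=/tmp/demo-crictl.yaml
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee "$f"
```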
	I0716 18:50:29.409706    2528 ssh_runner.go:195] Run: which cri-dockerd
	I0716 18:50:29.415783    2528 command_runner.go:130] > /usr/bin/cri-dockerd
	I0716 18:50:29.429393    2528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0716 18:50:29.446570    2528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0716 18:50:29.491078    2528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0716 18:50:29.691289    2528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0716 18:50:29.877683    2528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0716 18:50:29.877918    2528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0716 18:50:29.923167    2528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0716 18:50:30.134425    2528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0716 18:51:31.260709    2528 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0716 18:51:31.261095    2528 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0716 18:51:31.261355    2528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1257325s)
	I0716 18:51:31.275246    2528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0716 18:51:31.303210    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.303633    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	I0716 18:51:31.303702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	I0716 18:51:31.303781    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0716 18:51:31.303972    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0716 18:51:31.304057    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0716 18:51:31.304131    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304221    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304290    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304510    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304605    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304683    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304759    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.304977    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0716 18:51:31.305054    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0716 18:51:31.305129    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0716 18:51:31.305215    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	I0716 18:51:31.305288    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0716 18:51:31.305353    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0716 18:51:31.305425    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0716 18:51:31.305501    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0716 18:51:31.305586    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0716 18:51:31.305760    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0716 18:51:31.305802    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0716 18:51:31.305850    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.305956    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306055    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306127    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306209    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306282    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0716 18:51:31.306345    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306414    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306497    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306596    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306658    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306738    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306830    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306890    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.306965    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307029    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307162    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0716 18:51:31.307204    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307262    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307350    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307472    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0716 18:51:31.307545    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0716 18:51:31.307616    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0716 18:51:31.307702    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0716 18:51:31.307770    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0716 18:51:31.307839    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0716 18:51:31.307906    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0716 18:51:31.307996    2528 command_runner.go:130] > Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	I0716 18:51:31.308082    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0716 18:51:31.308146    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	I0716 18:51:31.308213    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0716 18:51:31.308304    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0716 18:51:31.308371    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	I0716 18:51:31.308441    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	I0716 18:51:31.308526    2528 command_runner.go:130] > Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0716 18:51:31.308715    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	I0716 18:51:31.308795    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	I0716 18:51:31.308884    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0716 18:51:31.308973    2528 command_runner.go:130] > Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0716 18:51:31.318841    2528 out.go:177] 
	W0716 18:51:31.321802    2528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 01:49:57 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.192412290Z" level=info msg="Starting up"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.193449100Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 01:49:57 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:57.194829714Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.227782944Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254064107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254176808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254242909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254354010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254532012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254568212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254757814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254887815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254909315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.254920616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255042017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.255403720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258730854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.258941856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259233259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259334460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259456261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.259645163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294536412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294749314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294796015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294818315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.294835115Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295198819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295488722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295610923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295654023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295693324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295714024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295729224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295742524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295757224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295772125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295785325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295799625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295812725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295834625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295849825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295863525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295877526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295892526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295906626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.295919326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296004827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296048727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296069027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296149928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296168528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296182729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296217729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296287030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296336130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296382131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296399931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296413531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296424231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296446931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296462331Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.296978437Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297057337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297185639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 01:49:57 multinode-343600-m02 dockerd[672]: time="2024-07-17T01:49:57.297467241Z" level=info msg="containerd successfully booted in 0.071653s"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.264532379Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.296848283Z" level=info msg="Loading containers: start."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.467133881Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.720461023Z" level=info msg="Loading containers: done."
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745591452Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.745768953Z" level=info msg="Daemon has completed initialization"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867722462Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 01:49:58 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:49:58.867844264Z" level=info msg="API listen on [::]:2376"
	Jul 17 01:49:58 multinode-343600-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 01:50:30 multinode-343600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.168036457Z" level=info msg="Processing signal 'terminated'"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169603657Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.169952957Z" level=info msg="Daemon shutdown complete"
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170053257Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 01:50:30 multinode-343600-m02 dockerd[665]: time="2024-07-17T01:50:30.170076557Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 01:50:31 multinode-343600-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 01:50:31 multinode-343600-m02 dockerd[1074]: time="2024-07-17T01:50:31.236909345Z" level=info msg="Starting up"
	Jul 17 01:51:31 multinode-343600-m02 dockerd[1074]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 01:51:31 multinode-343600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0716 18:51:31.322160    2528 out.go:239] * 
	W0716 18:51:31.323532    2528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0716 18:51:31.326510    2528 out.go:177] 
	
	
	==> Docker <==
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.441322760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.444803881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445203261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445465247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.445870326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a26feccaa68b679c2f6d00f614e4adf2cc5bf98906509bdec1747e2d39c02fd/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:47:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3b8fefc458b2998e43b437af90048c24ba22c2d1a0b9d79d04dc11d3de628f4/resolv.conf as [nameserver 172.27.160.1]"
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819872204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819962798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.819988196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.820116987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951064604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.951849251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.952062036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:47:57 multinode-343600 dockerd[1441]: time="2024-07-17T01:47:57.953861614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336423189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336625889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336741790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:07.336832990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:07 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e933ef2daad4364897479f1d4f6dd2faf79a854c01e8e9af2ac4b320898cb5f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 01:52:09 multinode-343600 cri-dockerd[1332]: time="2024-07-17T01:52:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353261558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353669157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.353691157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 01:52:09 multinode-343600 dockerd[1441]: time="2024-07-17T01:52:09.354089456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb7b6f4d3bd7f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Running             busybox                   0                   e933ef2daad43       busybox-fc5497c4f-9zzvz
	832a042d8e687       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   b3b8fefc458b2       coredns-7db6d8ff4d-mmfw4
	a5100a7b9d171       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   1a26feccaa68b       storage-provisioner
	553740a819161       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              27 minutes ago      Running             kindnet-cni               0                   e33a722a67030       kindnet-wlznl
	570cf9cf23df5       53c535741fb44                                                                                         27 minutes ago      Running             kube-proxy                0                   6f93a2ff5382c       kube-proxy-rzpvp
	09c2d66cab0fa       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   65e58842a300d       etcd-multinode-343600
	11399272ac43d       56ce0fd9fb532                                                                                         28 minutes ago      Running             kube-apiserver            0                   65d102f6b5028       kube-apiserver-multinode-343600
	5ae79ae87bad6       e874818b3caac                                                                                         28 minutes ago      Running             kube-controller-manager   0                   7b34dafe3c26e       kube-controller-manager-multinode-343600
	bf07a7b3f6ff7       7820c83aa1394                                                                                         28 minutes ago      Running             kube-scheduler            0                   17f0e856743b6       kube-scheduler-multinode-343600
	
	
	==> coredns [832a042d8e68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a580c971ab07dc93aa6f80c1f806e93c54050ff2efd4e9ce923b4c4049d8d47e9742d783be41d125889e68889d7d347458195b3c017bc916296a45beab62f517
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36850 - 30152 "HINFO IN 3533822944047288697.5146741808055306575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046392232s
	[INFO] 10.244.0.3:60325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249894s
	[INFO] 10.244.0.3:49103 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185058091s
	[INFO] 10.244.0.3:40233 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040129057s
	[INFO] 10.244.0.3:53435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.056299346s
	[INFO] 10.244.0.3:52034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177795s
	[INFO] 10.244.0.3:55399 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037734119s
	[INFO] 10.244.0.3:55087 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260193s
	[INFO] 10.244.0.3:47273 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232394s
	[INFO] 10.244.0.3:48029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.115999484s
	[INFO] 10.244.0.3:49805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126996s
	[INFO] 10.244.0.3:42118 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112698s
	[INFO] 10.244.0.3:50779 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153196s
	[INFO] 10.244.0.3:49493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098397s
	[INFO] 10.244.0.3:36336 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160395s
	[INFO] 10.244.0.3:37610 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068999s
	[INFO] 10.244.0.3:51523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052899s
	[INFO] 10.244.0.3:49356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333991s
	[INFO] 10.244.0.3:39090 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137797s
	[INFO] 10.244.0.3:50560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244893s
	[INFO] 10.244.0.3:44091 - 5 "PTR IN 1.160.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164296s
	
	
	==> describe nodes <==
	Name:               multinode-343600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_16T18_47_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:15:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:12:49 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:12:49 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:12:49 +0000   Wed, 17 Jul 2024 01:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:12:49 +0000   Wed, 17 Jul 2024 01:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.170.61
	  Hostname:    multinode-343600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fe83095ab54b17906d94b7ce51f643
	  System UUID:                218d91af-3626-904d-8a44-fc7be5676dd3
	  Boot ID:                    b2e70455-4eaa-4636-bbcb-fe6d155d3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9zzvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-mmfw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-343600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-wlznl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-343600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-multinode-343600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-rzpvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-343600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node multinode-343600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node multinode-343600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node multinode-343600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node multinode-343600 event: Registered Node multinode-343600 in Controller
	  Normal  NodeReady                27m   kubelet          Node multinode-343600 status is now: NodeReady
	
	
	Name:               multinode-343600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-343600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-343600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_16T19_07_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:07:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-343600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:11:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 02:08:49 +0000   Wed, 17 Jul 2024 02:11:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.173.202
	  Hostname:    multinode-343600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c97ec282efd48b88cab0b67f2c8f7c2
	  System UUID:                bad18aee-b3d1-0c44-b82f-1f20fb05d065
	  Boot ID:                    33c029cd-4782-43da-a050-56424fd1feae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xwt6c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-ghs2x              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-4bg7x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m33s (x2 over 7m33s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x2 over 7m33s)  kubelet          Node multinode-343600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s (x2 over 7m33s)  kubelet          Node multinode-343600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m29s                  node-controller  Node multinode-343600-m03 event: Registered Node multinode-343600-m03 in Controller
	  Normal  NodeReady                7m4s                   kubelet          Node multinode-343600-m03 status is now: NodeReady
	  Normal  NodeNotReady             3m39s                  node-controller  Node multinode-343600-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +6.959886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:46] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.179558] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.392251] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.107477] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.605894] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.222043] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.870405] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.184324] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.180543] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.266230] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[Jul17 01:47] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +0.102407] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.735479] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.605992] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.112720] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.553262] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.146767] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.979240] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.262681] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.810088] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:52] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [09c2d66cab0f] <==
	{"level":"info","ts":"2024-07-17T02:07:51.533931Z","caller":"traceutil/trace.go:171","msg":"trace[462829157] transaction","detail":"{read_only:false; response_revision:1438; number_of_response:1; }","duration":"230.454648ms","start":"2024-07-17T02:07:51.303457Z","end":"2024-07-17T02:07:51.533912Z","steps":["trace[462829157] 'process raft request'  (duration: 230.337651ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.534107Z","caller":"traceutil/trace.go:171","msg":"trace[2024600941] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1700; }","duration":"209.685912ms","start":"2024-07-17T02:07:51.324411Z","end":"2024-07-17T02:07:51.534097Z","steps":["trace[2024600941] 'read index received'  (duration: 209.681812ms)","trace[2024600941] 'applied index is now lower than readState.Index'  (duration: 3.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.534885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.788109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T02:07:51.53521Z","caller":"traceutil/trace.go:171","msg":"trace[1749208603] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1438; }","duration":"210.773183ms","start":"2024-07-17T02:07:51.324407Z","end":"2024-07-17T02:07:51.53518Z","steps":["trace[1749208603] 'agreement among raft nodes before linearized reading'  (duration: 209.719411ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:07:51.684235Z","caller":"traceutil/trace.go:171","msg":"trace[1696915811] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"315.91493ms","start":"2024-07-17T02:07:51.3683Z","end":"2024-07-17T02:07:51.684215Z","steps":["trace[1696915811] 'process raft request'  (duration: 269.338893ms)","trace[1696915811] 'compare'  (duration: 46.000452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:07:51.684483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.073221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:07:51.684879Z","caller":"traceutil/trace.go:171","msg":"trace[788779948] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1440; }","duration":"154.559007ms","start":"2024-07-17T02:07:51.530309Z","end":"2024-07-17T02:07:51.684868Z","steps":["trace[788779948] 'agreement among raft nodes before linearized reading'  (duration: 153.972223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:07:51.686157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T02:07:51.368284Z","time spent":"316.016028ms","remote":"127.0.0.1:54094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-343600-m03\" mod_revision:1435 > success:<request_put:<key:\"/registry/minions/multinode-343600-m03\" value_size:2787 >> failure:<request_range:<key:\"/registry/minions/multinode-343600-m03\" > >"}
	{"level":"info","ts":"2024-07-17T02:07:51.684259Z","caller":"traceutil/trace.go:171","msg":"trace[733279489] linearizableReadLoop","detail":"{readStateIndex:1701; appliedIndex:1700; }","duration":"149.085956ms","start":"2024-07-17T02:07:51.535161Z","end":"2024-07-17T02:07:51.684247Z","steps":["trace[733279489] 'read index received'  (duration: 102.314225ms)","trace[733279489] 'applied index is now lower than readState.Index'  (duration: 46.770731ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:57.933889Z","caller":"traceutil/trace.go:171","msg":"trace[1157037549] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"134.713343ms","start":"2024-07-17T02:07:57.799153Z","end":"2024-07-17T02:07:57.933866Z","steps":["trace[1157037549] 'process raft request'  (duration: 118.150293ms)","trace[1157037549] 'compare'  (duration: 16.437454ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.084008Z","caller":"traceutil/trace.go:171","msg":"trace[861469173] transaction","detail":"{read_only:false; response_revision:1449; number_of_response:1; }","duration":"191.891891ms","start":"2024-07-17T02:07:57.892075Z","end":"2024-07-17T02:07:58.083967Z","steps":["trace[861469173] 'process raft request'  (duration: 162.879779ms)","trace[861469173] 'compare'  (duration: 28.877116ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T02:07:58.281477Z","caller":"traceutil/trace.go:171","msg":"trace[1029922395] transaction","detail":"{read_only:false; response_revision:1450; number_of_response:1; }","duration":"152.699855ms","start":"2024-07-17T02:07:58.128759Z","end":"2024-07-17T02:07:58.281459Z","steps":["trace[1029922395] 'process raft request'  (duration: 73.524105ms)","trace[1029922395] 'compare'  (duration: 78.894858ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:08:02.438563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.888134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-343600-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-17T02:08:02.438671Z","caller":"traceutil/trace.go:171","msg":"trace[1739914459] range","detail":"{range_begin:/registry/minions/multinode-343600-m03; range_end:; response_count:1; response_revision:1459; }","duration":"183.056129ms","start":"2024-07-17T02:08:02.255602Z","end":"2024-07-17T02:08:02.438658Z","steps":["trace[1739914459] 'range keys from in-memory index tree'  (duration: 182.583642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T02:08:02.438582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.136257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-17T02:08:02.439152Z","caller":"traceutil/trace.go:171","msg":"trace[89915440] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1459; }","duration":"134.726841ms","start":"2024-07-17T02:08:02.304415Z","end":"2024-07-17T02:08:02.439141Z","steps":["trace[89915440] 'range keys from in-memory index tree'  (duration: 133.989162ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:08:02.583228Z","caller":"traceutil/trace.go:171","msg":"trace[1380485395] transaction","detail":"{read_only:false; response_revision:1460; number_of_response:1; }","duration":"136.847484ms","start":"2024-07-17T02:08:02.44636Z","end":"2024-07-17T02:08:02.583207Z","steps":["trace[1380485395] 'process raft request'  (duration: 136.606391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:11:24.483596Z","caller":"traceutil/trace.go:171","msg":"trace[182214649] transaction","detail":"{read_only:false; response_revision:1658; number_of_response:1; }","duration":"179.381042ms","start":"2024-07-17T02:11:24.304195Z","end":"2024-07-17T02:11:24.483576Z","steps":["trace[182214649] 'process raft request'  (duration: 179.23744ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:11:25.634418Z","caller":"traceutil/trace.go:171","msg":"trace[1300292607] linearizableReadLoop","detail":"{readStateIndex:1964; appliedIndex:1963; }","duration":"103.613334ms","start":"2024-07-17T02:11:25.530788Z","end":"2024-07-17T02:11:25.634401Z","steps":["trace[1300292607] 'read index received'  (duration: 103.552533ms)","trace[1300292607] 'applied index is now lower than readState.Index'  (duration: 60.201µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:11:25.634824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.037741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T02:11:25.634917Z","caller":"traceutil/trace.go:171","msg":"trace[1757730791] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1659; }","duration":"104.269544ms","start":"2024-07-17T02:11:25.530637Z","end":"2024-07-17T02:11:25.634907Z","steps":["trace[1757730791] 'agreement among raft nodes before linearized reading'  (duration: 103.955939ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:11:25.635118Z","caller":"traceutil/trace.go:171","msg":"trace[1848997321] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"162.547863ms","start":"2024-07-17T02:11:25.472557Z","end":"2024-07-17T02:11:25.635105Z","steps":["trace[1848997321] 'process raft request'  (duration: 161.70205ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T02:12:16.670261Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1378}
	{"level":"info","ts":"2024-07-17T02:12:16.680696Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1378,"took":"9.552517ms","hash":629436316,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1712128,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-17T02:12:16.680812Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":629436316,"revision":1378,"compact-revision":1137}
	
	
	==> kernel <==
	 02:15:20 up 30 min,  0 users,  load average: 0.27, 0.38, 0.34
	Linux multinode-343600 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [553740a81916] <==
	I0717 02:14:14.275245       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:14:24.279799       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:14:24.279918       1 main.go:303] handling current node
	I0717 02:14:24.279937       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:14:24.279945       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:14:34.281206       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:14:34.281324       1 main.go:303] handling current node
	I0717 02:14:34.281343       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:14:34.281351       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:14:44.271706       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:14:44.271809       1 main.go:303] handling current node
	I0717 02:14:44.271829       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:14:44.271837       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:14:54.274895       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:14:54.274966       1 main.go:303] handling current node
	I0717 02:14:54.275228       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:14:54.275381       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:15:04.280868       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:15:04.281144       1 main.go:303] handling current node
	I0717 02:15:04.281168       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:15:04.281178       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	I0717 02:15:14.275793       1 main.go:299] Handling node with IPs: map[172.27.170.61:{}]
	I0717 02:15:14.275913       1 main.go:303] handling current node
	I0717 02:15:14.276045       1 main.go:299] Handling node with IPs: map[172.27.173.202:{}]
	I0717 02:15:14.276099       1 main.go:326] Node multinode-343600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [11399272ac43] <==
	I0717 01:47:18.564079       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:47:18.582648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:47:18.585440       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 01:47:18.585733       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:47:18.651260       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:47:19.444286       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 01:47:19.466622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 01:47:19.466657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:47:20.693765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:47:20.783852       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:47:20.890710       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 01:47:20.909718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.170.61]
	I0717 01:47:20.910861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:47:20.919109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:47:21.504448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:47:22.015050       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:47:22.056694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 01:47:22.089969       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:47:36.596396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 01:47:36.860488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 02:03:34.189300       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49832: use of closed network connection
	E0717 02:03:35.136967       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49837: use of closed network connection
	E0717 02:03:35.880019       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49842: use of closed network connection
	E0717 02:04:11.454010       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49860: use of closed network connection
	E0717 02:04:21.903848       1 conn.go:339] Error on socket receive: read tcp 172.27.170.61:8443->172.27.160.1:49862: use of closed network connection
	
	
	==> kube-controller-manager [5ae79ae87bad] <==
	I0717 01:47:37.831661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.517336ms"
	I0717 01:47:37.861371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.662577ms"
	I0717 01:47:37.863877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.298µs"
	I0717 01:47:56.816181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.495µs"
	I0717 01:47:56.864670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.098µs"
	I0717 01:47:58.742434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.678µs"
	I0717 01:47:58.803685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.362227ms"
	I0717 01:47:58.803772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.192µs"
	I0717 01:48:01.059973       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 01:52:06.859031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.949838ms"
	I0717 01:52:06.876210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.855684ms"
	I0717 01:52:06.899379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.106015ms"
	I0717 01:52:06.899571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0717 01:52:09.997094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.053979ms"
	I0717 01:52:09.999036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0717 02:07:47.450050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-343600-m03\" does not exist"
	I0717 02:07:47.466038       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-343600-m03" podCIDRs=["10.244.1.0/24"]
	I0717 02:07:51.299816       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-343600-m03"
	I0717 02:08:16.479927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-343600-m03"
	I0717 02:08:16.519666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.098µs"
	I0717 02:08:16.544360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.099µs"
	I0717 02:08:19.303837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.225114ms"
	I0717 02:08:19.305728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.099µs"
	I0717 02:11:41.458932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.766706ms"
	I0717 02:11:41.459469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.503µs"
	
	
	==> kube-proxy [570cf9cf23df] <==
	I0717 01:47:38.257677       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:47:38.281444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.170.61"]
	I0717 01:47:38.383907       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:47:38.384157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:47:38.384180       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:47:38.388773       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:47:38.389313       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:47:38.389383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:47:38.391493       1 config.go:192] "Starting service config controller"
	I0717 01:47:38.391571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:47:38.391600       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:47:38.391612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:47:38.404800       1 config.go:319] "Starting node config controller"
	I0717 01:47:38.404815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:47:38.492818       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:47:38.492829       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:47:38.505297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf07a7b3f6ff] <==
	W0717 01:47:19.505573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:47:19.505852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:47:19.514675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:47:19.514778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:47:19.559545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.559989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.609827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:47:19.610232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:47:19.619601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.619701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:19.734485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:47:19.735115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:47:19.765473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:47:19.765662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:47:19.858003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:47:19.858061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:47:20.056123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:47:20.056396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:47:20.057222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:47:20.057591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 01:47:20.139260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:47:20.139625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:47:20.148448       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:47:20.148766       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 01:47:21.778160       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:10:22 multinode-343600 kubelet[2292]: E0717 02:10:22.203113    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:10:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:11:22 multinode-343600 kubelet[2292]: E0717 02:11:22.204341    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:11:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:12:22 multinode-343600 kubelet[2292]: E0717 02:12:22.201086    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:12:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:13:22 multinode-343600 kubelet[2292]: E0717 02:13:22.202923    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:13:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:13:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:13:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:13:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:14:22 multinode-343600 kubelet[2292]: E0717 02:14:22.201601    2292 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:14:22 multinode-343600 kubelet[2292]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:14:22 multinode-343600 kubelet[2292]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:14:22 multinode-343600 kubelet[2292]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:14:22 multinode-343600 kubelet[2292]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:15:12.728079    4972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-343600 -n multinode-343600: (12.3844488s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-343600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (164.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (303.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-673300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-673300 --driver=hyperv: exit status 1 (4m59.8230486s)

                                                
                                                
-- stdout --
	* [NoKubernetes-673300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-673300" primary control-plane node in "NoKubernetes-673300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:32:44.492995    6032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-673300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-673300 -n NoKubernetes-673300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-673300 -n NoKubernetes-673300: exit status 7 (3.6052882s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 19:37:44.305996    3508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0716 19:37:47.762322    3508 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-673300".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-673300 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-673300:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-673300" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.43s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (10800.427s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-293500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (32m46s)
	TestNetworkPlugins/group/auto (5m13s)
	TestNetworkPlugins/group/auto/Start (5m13s)
	TestNetworkPlugins/group/calico (17s)
	TestNetworkPlugins/group/calico/Start (17s)
	TestNetworkPlugins/group/kindnet (3m20s)
	TestNetworkPlugins/group/kindnet/Start (3m20s)
	TestPause (8m42s)
	TestPause/serial (8m42s)
	TestPause/serial/SecondStartNoReconfiguration (1m54s)
	TestStartStop (8m42s)

                                                
                                                
goroutine 2321 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00013b1e0, 0xc00099fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008302e8, {0x48a70e0, 0x2a, 0x2a}, {0x25003da?, 0x3380cf?, 0x48ca4a0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006a6dc0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006a6dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 27 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0005b6300)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 861 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001953b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 908
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2297 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002ac4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002ac4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0002ac4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0002ac4e0, 0xc0008341c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2295
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2093 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0013d01a0, 0xc000ba86a8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1984
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 39 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 38
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2318 [syscall, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc000669b20?, 0x238c870?, 0xc000669b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc000669bf8?, 0x2829a5?, 0x1d51a8e0a28?, 0x41?, 0x278ba6?, 0xc00082ff80?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x270, {0xc000973dec?, 0x214, 0x3341df?}, 0x28aefd?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001b32288?, {0xc000973dec?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001b32288, {0xc000973dec, 0x214, 0x214})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0001121b8, {0xc000973dec?, 0x1d560063b68?, 0x69?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001568390, {0x34dae80, 0xc000b88108})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc001568390}, {0x34dae80, 0xc000b88108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000669e78?, {0x34dafc0, 0xc001568390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc001568390?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc001568390}, {0x34daf40, 0xc0001121b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001c89080?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2317
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2094 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0013d0340, {0x24a4339?, 0x34d4e78?}, 0xc001569d40)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d0340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0013d0340, 0xc001744080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2296 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002ac340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002ac340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0002ac340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0002ac340, 0xc000834180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2295
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1061 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a59980, 0xc001c88de0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 899
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2334 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7fff77244e10?, {0xc001373bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6d0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0007d4960)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a06000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000a06000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00013bba0, 0xc000a06000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc00013bba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc00013bba0, 0xc001569d40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2094
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2336 [syscall, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc0019c1b20?, 0x238c870?, 0xc0019c1b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc0019c1bf8?, 0x2829a5?, 0x1d51a8e0598?, 0x10000?, 0x1?, 0xc001548f70?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x718, {0xc0013c8110?, 0x5ef0, 0x3341df?}, 0x10?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001644508?, {0xc0013c8110?, 0x10000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001644508, {0xc0013c8110, 0x5ef0, 0x5ef0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6ee0, {0xc0013c8110?, 0x3d25?, 0x7e45?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001569e30, {0x34dae80, 0xc000b88a48})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc001569e30}, {0x34dae80, 0xc000b88a48}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x24b5a77?, {0x34dafc0, 0xc001569e30})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc001569e30?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc001569e30}, {0x34daf40, 0xc0000a6ee0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x2f84408?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2334
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 166 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007d84e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2316 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000123200, 0xc001770480)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2313
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2355 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc000999b20?, 0x235be80?, 0xc000999b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc000999bf8?, 0x2829a5?, 0x0?, 0x0?, 0x0?, 0xc000999bd8?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a0, {0xc001363d47?, 0x2b9, 0x3341df?}, 0xc000999c04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001556f08?, {0xc001363d47?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001556f08, {0xc001363d47, 0x2b9, 0x2b9})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6cd8, {0xc001363d47?, 0xc000999d98?, 0xe23?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000964810, {0x34dae80, 0xc0005bca78})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc000964810}, {0x34dae80, 0xc0005bca78}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34dafc0, 0xc000964810})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc000964810?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc000964810}, {0x34daf40, 0xc0000a6cd8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0009a0000?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2289
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 862 [chan receive, 150 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000553b00, 0xc000054360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 908
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2298 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002ac680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002ac680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0002ac680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0002ac680, 0xc000834200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2295
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2319 [syscall, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc00136fb20?, 0x238c870?, 0xc00136fb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc00136fbf8?, 0x2829a5?, 0x1d51a8e0eb8?, 0x67?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3d0, {0xc0007cfd2e?, 0x2d2, 0x3341df?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001b32788?, {0xc0007cfd2e?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001b32788, {0xc0007cfd2e, 0x2d2, 0x2d2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000112240, {0xc0007cfd2e?, 0x1d51a8eda88?, 0xe70?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015683c0, {0x34dae80, 0xc0000a67a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc0015683c0}, {0x34dae80, 0xc0000a67a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34dafc0, 0xc0015683c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc0015683c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc0015683c0}, {0x34daf40, 0xc000112240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2317
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 167 [chan receive, 172 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008343c0, 0xc000054360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 192 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000834390, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1f97c60?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0007d83c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008343c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005ba890, {0x34dc2c0, 0xc0002a56e0}, 0x1, 0xc000054360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005ba890, 0x3b9aca00, 0x0, 0x1, 0xc000054360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 193 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34fff00, 0xc000054360}, 0xc001345f50, 0xc001345f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34fff00, 0xc000054360}, 0x61?, 0xc001345f50, 0xc001345f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34fff00?, 0xc000054360?}, 0x6136626632656531?, 0x6437393661393038?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x20736d3030353a79?, 0x3a74756f656d6954?, 0x61432073306d3031?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 194 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2295 [chan receive, 9 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0002ac000, 0x2f84710)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2087
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 762 [IO wait, 160 minutes]:
internal/poll.runtime_pollWait(0x1d560005c80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x28fdf6?, 0x4957900?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001645ba0, 0xc00133fbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc001645b88, 0x304, {0xc00087c000?, 0x0?, 0x0?}, 0xc000bfd008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc001645b88, 0xc00133fd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc001645b88)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0016321e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0016321e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007ee0f0, {0x34f2fc0, 0xc0016321e0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0007ee0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00013b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 759
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2335 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc001371b20?, 0x23418b0?, 0xc001371b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc001371bf8?, 0x28283b?, 0x1d51a8e0a28?, 0x41?, 0x278ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6f0, {0xc00143ba25?, 0x5db, 0x3341df?}, 0xc001371c04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001644008?, {0xc00143ba25?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001644008, {0xc00143ba25, 0x5db, 0x5db})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6eb0, {0xc00143ba25?, 0xc001371d98?, 0x224?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001569e00, {0x34dae80, 0xc000112130})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc001569e00}, {0x34dae80, 0xc000112130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34dafc0, 0xc001569e00})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc001569e00?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc001569e00}, {0x34daf40, 0xc0000a6eb0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2334
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2114 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d09c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d09c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d09c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d09c0, 0xc001744280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2050 [chan receive, 9 minutes]:
testing.(*T).Run(0xc000b9eea0, {0x24a5856?, 0xd18c2e2800?}, 0xc00142a000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc000b9eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc000b9eea0, 0x2f84508)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2317 [syscall, locked to thread]:
syscall.SyscallN(0x7fff77244e10?, {0xc0017afbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6ec, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0007d4990)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a06180)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000a06180)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0002ad1e0, 0xc000a06180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0002ad1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0002ad1e0, 0xc001568090)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1984 [chan receive, 33 minutes]:
testing.(*T).Run(0xc000b9e340, {0x24a4334?, 0x2ef4ad?}, 0xc000ba86a8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000b9e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000b9e340, 0x2f844f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2117 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d0ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d0ea0, 0xc001744480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2115 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0013d0b60, {0x24a4339?, 0x34d4e78?}, 0xc0009640c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0013d0b60, 0xc001744300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 923 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000553ad0, 0x36)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1f97c60?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001953a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000553b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001463310, {0x34dc2c0, 0xc00134eb70}, 0x1, 0xc000054360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001463310, 0x3b9aca00, 0x0, 0x1, 0xc000054360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

goroutine 924 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34fff00, 0xc000054360}, 0xc0015b7f50, 0xc0015b7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34fff00, 0xc000054360}, 0x0?, 0xc0015b7f50, 0xc0015b7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34fff00?, 0xc000054360?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

goroutine 925 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 924
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2320 [select]:
os/exec.(*Cmd).watchCtx(0xc000a06180, 0xc0002227e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2317
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2313 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7fff77244e10?, {0xc00006ba10?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x668, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0007d4900)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000123200)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000123200)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0002acd00, 0xc000123200)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x34ffd40, 0xc0003fc230}, 0xc0002acd00, {0xc0015ce090?, 0xc0000a1d10?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0002acd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0002acd00, 0xc000834400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2278
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2087 [chan receive, 9 minutes]:
testing.(*T).Run(0xc000b9f520, {0x24a4334?, 0x3c73d3?}, 0x2f84710)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000b9f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000b9f520, 0x2f84538)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2095 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d04e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d04e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d04e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d04e0, 0xc001744100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1280 [chan send, 146 minutes]:
os/exec.(*Cmd).watchCtx(0xc000123e00, 0xc001c0b0e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1181
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2315 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc0015cbb20?, 0x23410b0?, 0xc0015cbb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc0015cbbf8?, 0x2829a5?, 0x0?, 0x0?, 0x0?, 0xc001548c30?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6b8, {0xc0009a2795?, 0x186b, 0x3341df?}, 0xc001c35340?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001557188?, {0xc0009a2795?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001557188, {0xc0009a2795, 0x186b, 0x186b})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00079e068, {0xc0009a2795?, 0xc0015cbd98?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015681e0, {0x34dae80, 0xc000112150})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc0015681e0}, {0x34dae80, 0xc000112150}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34dafc0, 0xc0015681e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc0015681e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc0015681e0}, {0x34daf40, 0xc00079e068}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000802500?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2313
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2118 [chan receive]:
testing.(*T).Run(0xc0013d1040, {0x24a4339?, 0x34d4e78?}, 0xc001568090)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0013d1040, 0xc001744500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2096 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d0680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d0680, 0xc001744180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2116 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d0d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d0d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d0d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d0d00, 0xc001744400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2097 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d0820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d0820, 0xc001744200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2093
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2301 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002acea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002acea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0002acea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0002acea0, 0xc000834300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2295
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2300 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002acb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002acb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0002acb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0002acb60, 0xc000834280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2295
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2278 [chan receive, 3 minutes]:
testing.(*T).Run(0xc00013b6c0, {0x24e3639?, 0x24?}, 0xc000834400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc00013b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc00013b6c0, 0xc00142a000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2050
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2354 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc001885b20?, 0x2352568?, 0xc001885b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc001885bf8?, 0x28283b?, 0x1d51a8e0eb8?, 0x41?, 0x278ba6?, 0xc001703f80?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6c8, {0xc00087a9ef?, 0x211, 0x3341df?}, 0x28aefd?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001556788?, {0xc00087a9ef?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001556788, {0xc00087a9ef, 0x211, 0x211})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a68f0, {0xc00087a9ef?, 0x1d51a9d1cc8?, 0x6a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009646c0, {0x34dae80, 0xc000112210})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc0009646c0}, {0x34dae80, 0xc000112210}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34dafc0, 0xc0009646c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc0009646c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc0009646c0}, {0x34daf40, 0xc0000a68f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0017706c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2289
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2299 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc000b8c870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002ac9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002ac9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0002ac9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0002ac9c0, 0xc000834240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2295
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2314 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x297ec5?, {0xc001777b20?, 0x23410b0?, 0xc001777b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x28fdf6?, 0x4957900?, 0xc001777bf8?, 0x28283b?, 0x1d51a8e0a28?, 0x35?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3c4, {0xc00087ade7?, 0x219, 0x3341df?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001556a08?, {0xc00087ade7?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001556a08, {0xc00087ade7, 0x219, 0x219})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00079e028, {0xc00087ade7?, 0xc001c35dc0?, 0x68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001568180, {0x34dae80, 0xc0000a6528})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34dafc0, 0xc001568180}, {0x34dae80, 0xc0000a6528}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001777e78?, {0x34dafc0, 0xc001568180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x485ac10?, {0x34dafc0?, 0xc001568180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34dafc0, 0xc001568180}, {0x34daf40, 0xc00079e028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001770000?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2313
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2356 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000123080, 0xc001770420)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2289
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2289 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7fff77244e10?, {0xc00099dbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x648, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0005f6720)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000123080)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000123080)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0002ad040, 0xc000123080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0002ad040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0002ad040, 0xc0009640c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2115
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2337 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a06000, 0xc001c89260)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2334
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9


Test pass (153/210)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.95
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.21
9 TestDownloadOnly/v1.20.0/DeleteAll 1.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.19
12 TestDownloadOnly/v1.30.2/json-events 11.97
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.2
18 TestDownloadOnly/v1.30.2/DeleteAll 1.28
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 1.24
21 TestDownloadOnly/v1.31.0-beta.0/json-events 18.66
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.31
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 1.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 1.13
30 TestBinaryMirror 6.75
31 TestOffline 257.95
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
36 TestAddons/Setup 441.27
39 TestAddons/parallel/Ingress 65.57
40 TestAddons/parallel/InspektorGadget 27.7
41 TestAddons/parallel/MetricsServer 22.37
42 TestAddons/parallel/HelmTiller 50.88
44 TestAddons/parallel/CSI 91.2
45 TestAddons/parallel/Headlamp 35.06
46 TestAddons/parallel/CloudSpanner 21.49
47 TestAddons/parallel/LocalPath 31.75
48 TestAddons/parallel/NvidiaDevicePlugin 21.79
49 TestAddons/parallel/Yakd 5.02
50 TestAddons/parallel/Volcano 79.41
53 TestAddons/serial/GCPAuth/Namespaces 0.34
54 TestAddons/StoppedEnableDisable 53.29
55 TestCertOptions 470.95
56 TestCertExpiration 924.32
57 TestDockerFlags 539.31
58 TestForceSystemdFlag 404.48
59 TestForceSystemdEnv 547.38
66 TestErrorSpam/start 16.8
67 TestErrorSpam/status 36.62
68 TestErrorSpam/pause 22.51
69 TestErrorSpam/unpause 22.65
70 TestErrorSpam/stop 61.39
73 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/StartWithProxy 238.37
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 128.19
77 TestFunctional/serial/KubeContext 0.12
78 TestFunctional/serial/KubectlGetPods 0.23
81 TestFunctional/serial/CacheCmd/cache/add_remote 25.95
82 TestFunctional/serial/CacheCmd/cache/add_local 11.09
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
84 TestFunctional/serial/CacheCmd/cache/list 0.18
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.13
86 TestFunctional/serial/CacheCmd/cache/cache_reload 35.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.37
88 TestFunctional/serial/MinikubeKubectlCmd 0.45
90 TestFunctional/serial/ExtraConfig 123.04
91 TestFunctional/serial/ComponentHealth 0.19
92 TestFunctional/serial/LogsCmd 8.54
93 TestFunctional/serial/LogsFileCmd 10.72
94 TestFunctional/serial/InvalidService 21.39
100 TestFunctional/parallel/StatusCmd 42.09
104 TestFunctional/parallel/ServiceCmdConnect 27.62
105 TestFunctional/parallel/AddonsCmd 0.57
106 TestFunctional/parallel/PersistentVolumeClaim 42.59
108 TestFunctional/parallel/SSHCmd 20.09
109 TestFunctional/parallel/CpCmd 58.14
110 TestFunctional/parallel/MySQL 67.92
111 TestFunctional/parallel/FileSync 9.81
112 TestFunctional/parallel/CertSync 61.15
116 TestFunctional/parallel/NodeLabels 0.2
118 TestFunctional/parallel/NonActiveRuntimeDisabled 10.25
120 TestFunctional/parallel/License 3.34
121 TestFunctional/parallel/ServiceCmd/DeployApp 16.44
122 TestFunctional/parallel/ProfileCmd/profile_not_create 11.36
123 TestFunctional/parallel/ProfileCmd/profile_list 10.77
124 TestFunctional/parallel/ServiceCmd/List 13.13
125 TestFunctional/parallel/ProfileCmd/profile_json_output 10.69
126 TestFunctional/parallel/ServiceCmd/JSONOutput 12.95
129 TestFunctional/parallel/Version/short 0.17
130 TestFunctional/parallel/Version/components 7.87
131 TestFunctional/parallel/ImageCommands/ImageListShort 7.25
132 TestFunctional/parallel/ImageCommands/ImageListTable 7.64
133 TestFunctional/parallel/ImageCommands/ImageListJson 7.46
134 TestFunctional/parallel/ImageCommands/ImageListYaml 7.43
135 TestFunctional/parallel/ImageCommands/ImageBuild 26.91
136 TestFunctional/parallel/ImageCommands/Setup 2.43
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 16.93
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 16.32
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 17.36
141 TestFunctional/parallel/DockerEnv/powershell 43.14
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.2
143 TestFunctional/parallel/ImageCommands/ImageRemove 15.73
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 15.89
145 TestFunctional/parallel/UpdateContextCmd/no_changes 2.65
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.46
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.45
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.35
150 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.92
151 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 47.98
159 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
160 TestFunctional/delete_echo-server_images 0.39
161 TestFunctional/delete_my-image_image 0.17
162 TestFunctional/delete_minikube_cached_images 0.19
170 TestMultiControlPlane/serial/NodeLabels 0.18
178 TestImageBuild/serial/Setup 199.46
179 TestImageBuild/serial/NormalBuild 9.96
180 TestImageBuild/serial/BuildWithBuildArg 9.01
181 TestImageBuild/serial/BuildWithDockerIgnore 7.85
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.6
186 TestJSONOutput/start/Command 212.26
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 7.94
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 7.95
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 39.92
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 1.34
214 TestMainNoArgs 0.17
215 TestMinikubeProfile 524.45
218 TestMountStart/serial/StartWithMountFirst 156.53
219 TestMountStart/serial/VerifyMountFirst 9.54
220 TestMountStart/serial/StartWithMountSecond 155.75
221 TestMountStart/serial/VerifyMountSecond 9.69
222 TestMountStart/serial/DeleteFirst 27.82
223 TestMountStart/serial/VerifyMountPostDelete 9.44
224 TestMountStart/serial/Stop 26.12
225 TestMountStart/serial/RestartStopped 117.51
226 TestMountStart/serial/VerifyMountPostStop 9.28
233 TestMultiNode/serial/MultiNodeLabels 0.22
234 TestMultiNode/serial/ProfileList 9.84
241 TestPreload 536.98
242 TestScheduledStopWindows 337.03
247 TestRunningBinaryUpgrade 1039.02
249 TestKubernetesUpgrade 1389.07
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.27
265 TestStoppedBinaryUpgrade/Setup 0.84
266 TestStoppedBinaryUpgrade/Upgrade 854.54
277 TestStoppedBinaryUpgrade/MinikubeLogs 9.46
TestDownloadOnly/v1.20.0/json-events (20.95s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-614900 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-614900 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (20.9529415s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.95s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-614900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-614900: exit status 85 (204.4569ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |          |
	|         | -p download-only-614900        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:05:30
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:05:30.064135    7404 out.go:291] Setting OutFile to fd 520 ...
	I0716 17:05:30.064381    7404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:05:30.064381    7404 out.go:304] Setting ErrFile to fd 564...
	I0716 17:05:30.064381    7404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0716 17:05:30.080053    7404 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0716 17:05:30.092557    7404 out.go:298] Setting JSON to true
	I0716 17:05:30.095241    7404 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16369,"bootTime":1721158360,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:05:30.095241    7404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:05:30.101889    7404 out.go:97] [download-only-614900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:05:30.102287    7404 notify.go:220] Checking for updates...
	W0716 17:05:30.102397    7404 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0716 17:05:30.105382    7404 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:05:30.109026    7404 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:05:30.111845    7404 out.go:169] MINIKUBE_LOCATION=19265
	I0716 17:05:30.115085    7404 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0716 17:05:30.120130    7404 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0716 17:05:30.121633    7404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:05:35.442444    7404 out.go:97] Using the hyperv driver based on user configuration
	I0716 17:05:35.442444    7404 start.go:297] selected driver: hyperv
	I0716 17:05:35.442444    7404 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:05:35.443031    7404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:05:35.493232    7404 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0716 17:05:35.495005    7404 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0716 17:05:35.495005    7404 cni.go:84] Creating CNI manager for ""
	I0716 17:05:35.495005    7404 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0716 17:05:35.495474    7404 start.go:340] cluster config:
	{Name:download-only-614900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-614900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:05:35.496001    7404 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:05:35.499175    7404 out.go:97] Downloading VM boot image ...
	I0716 17:05:35.499175    7404 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1721037971-19249-amd64.iso
	I0716 17:05:40.312536    7404 out.go:97] Starting "download-only-614900" primary control-plane node in "download-only-614900" cluster
	I0716 17:05:40.312536    7404 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0716 17:05:40.371376    7404 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0716 17:05:40.371743    7404 cache.go:56] Caching tarball of preloaded images
	I0716 17:05:40.372129    7404 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0716 17:05:40.375760    7404 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0716 17:05:40.375861    7404 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:05:40.481838    7404 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0716 17:05:44.739360    7404 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:05:44.740359    7404 preload.go:254] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:05:45.738806    7404 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0716 17:05:45.739919    7404 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-614900\config.json ...
	I0716 17:05:45.740833    7404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-614900\config.json: {Name:mk61625d5745bfe0da4d6e7faef3ca81cd3e7758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:05:45.741862    7404 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0716 17:05:45.743861    7404 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-614900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-614900"

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 17:05:51.011422    5508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1172378s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-614900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-614900: (1.1863932s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.19s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (11.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-914500 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-914500 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperv: (11.9688259s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (11.97s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-914500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-914500: exit status 85 (198.3235ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |                     |
	|         | -p download-only-614900        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT | 16 Jul 24 17:05 PDT |
	| delete  | -p download-only-614900        | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT | 16 Jul 24 17:05 PDT |
	| start   | -o=json --download-only        | download-only-914500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |                     |
	|         | -p download-only-914500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:05:53
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:05:53.525746    3596 out.go:291] Setting OutFile to fd 708 ...
	I0716 17:05:53.526439    3596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:05:53.526439    3596 out.go:304] Setting ErrFile to fd 712...
	I0716 17:05:53.526439    3596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:05:53.550448    3596 out.go:298] Setting JSON to true
	I0716 17:05:53.553446    3596 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16392,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:05:53.553446    3596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:05:53.558447    3596 out.go:97] [download-only-914500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:05:53.559445    3596 notify.go:220] Checking for updates...
	I0716 17:05:53.562480    3596 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:05:53.565716    3596 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:05:53.568554    3596 out.go:169] MINIKUBE_LOCATION=19265
	I0716 17:05:53.571036    3596 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0716 17:05:53.576555    3596 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0716 17:05:53.576555    3596 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:05:58.794931    3596 out.go:97] Using the hyperv driver based on user configuration
	I0716 17:05:58.795049    3596 start.go:297] selected driver: hyperv
	I0716 17:05:58.795049    3596 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:05:58.795384    3596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:05:58.845801    3596 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0716 17:05:58.845801    3596 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0716 17:05:58.845801    3596 cni.go:84] Creating CNI manager for ""
	I0716 17:05:58.845801    3596 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:05:58.846895    3596 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0716 17:05:58.846961    3596 start.go:340] cluster config:
	{Name:download-only-914500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-914500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:05:58.846961    3596 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:05:58.850203    3596 out.go:97] Starting "download-only-914500" primary control-plane node in "download-only-914500" cluster
	I0716 17:05:58.850203    3596 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:05:58.908214    3596 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:05:58.908214    3596 cache.go:56] Caching tarball of preloaded images
	I0716 17:05:58.909552    3596 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0716 17:05:58.912783    3596 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0716 17:05:58.912905    3596 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:05:59.024327    3596 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f94875995e68df9a8856f3277eea0126 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0716 17:06:03.208096    3596 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:06:03.209190    3596 preload.go:254] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-914500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-914500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0716 17:06:05.498086    1972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (1.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2774996s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (1.28s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-914500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-914500: (1.2401165s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.24s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (18.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-359900 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-359900 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv: (18.655024s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (18.66s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-359900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-359900: exit status 85 (307.746ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |                     |
	|         | -p download-only-614900             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT | 16 Jul 24 17:05 PDT |
	| delete  | -p download-only-614900             | download-only-614900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT | 16 Jul 24 17:05 PDT |
	| start   | -o=json --download-only             | download-only-914500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:05 PDT |                     |
	|         | -p download-only-914500             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| delete  | -p download-only-914500             | download-only-914500 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT | 16 Jul 24 17:06 PDT |
	| start   | -o=json --download-only             | download-only-359900 | minikube1\jenkins | v1.33.1 | 16 Jul 24 17:06 PDT |                     |
	|         | -p download-only-359900             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/16 17:06:08
	Running on machine: minikube1
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0716 17:06:08.217769    3144 out.go:291] Setting OutFile to fd 748 ...
	I0716 17:06:08.218358    3144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:06:08.218358    3144 out.go:304] Setting ErrFile to fd 700...
	I0716 17:06:08.218358    3144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:06:08.242655    3144 out.go:298] Setting JSON to true
	I0716 17:06:08.245428    3144 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16407,"bootTime":1721158360,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:06:08.245428    3144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:06:08.251263    3144 out.go:97] [download-only-359900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:06:08.251263    3144 notify.go:220] Checking for updates...
	I0716 17:06:08.255103    3144 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:06:08.258175    3144 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:06:08.260770    3144 out.go:169] MINIKUBE_LOCATION=19265
	I0716 17:06:08.264040    3144 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0716 17:06:08.269357    3144 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0716 17:06:08.270413    3144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0716 17:06:13.498740    3144 out.go:97] Using the hyperv driver based on user configuration
	I0716 17:06:13.499359    3144 start.go:297] selected driver: hyperv
	I0716 17:06:13.499359    3144 start.go:901] validating driver "hyperv" against <nil>
	I0716 17:06:13.499359    3144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0716 17:06:13.547871    3144 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0716 17:06:13.549270    3144 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0716 17:06:13.549471    3144 cni.go:84] Creating CNI manager for ""
	I0716 17:06:13.549471    3144 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0716 17:06:13.549471    3144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0716 17:06:13.549471    3144 start.go:340] cluster config:
	{Name:download-only-359900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-359900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0716 17:06:13.550080    3144 iso.go:125] acquiring lock: {Name:mk7efd1ee2fb6a809a697dbf054fb88e9458c66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0716 17:06:13.553721    3144 out.go:97] Starting "download-only-359900" primary control-plane node in "download-only-359900" cluster
	I0716 17:06:13.553721    3144 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0716 17:06:13.642248    3144 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0716 17:06:13.642248    3144 cache.go:56] Caching tarball of preloaded images
	I0716 17:06:13.642787    3144 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0716 17:06:13.648090    3144 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0716 17:06:13.648185    3144 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:06:13.757747    3144 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0716 17:06:19.098527    3144 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:06:19.099546    3144 preload.go:254] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0716 17:06:19.942704    3144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0716 17:06:19.943879    3144 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-359900\config.json ...
	I0716 17:06:19.944522    3144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-359900\config.json: {Name:mk6d32c597d488130c8720f7bce509d57187adf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0716 17:06:19.946203    3144 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0716 17:06:19.947122    3144 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.31.0-beta.0/kubectl.exe
	
	
	* The control-plane node download-only-359900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-359900"

-- /stdout --
** stderr ** 
	W0716 17:06:26.868501    3172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.31s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.107855s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.13s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-359900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-359900: (1.1251557s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.13s)

TestBinaryMirror (6.75s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-508700 --alsologtostderr --binary-mirror http://127.0.0.1:64079 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-508700 --alsologtostderr --binary-mirror http://127.0.0.1:64079 --driver=hyperv: (5.9455039s)
helpers_test.go:175: Cleaning up "binary-mirror-508700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-508700
--- PASS: TestBinaryMirror (6.75s)

TestOffline (257.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-643200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-643200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m36.4076172s)
helpers_test.go:175: Cleaning up "offline-docker-643200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-643200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-643200: (41.5449471s)
--- PASS: TestOffline (257.95s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-933500
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-933500: exit status 85 (206.2522ms)

-- stdout --
	* Profile "addons-933500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-933500"

-- /stdout --
** stderr ** 
	W0716 17:06:39.401241    5644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-933500
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-933500: exit status 85 (180.8196ms)

-- stdout --
	* Profile "addons-933500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-933500"

-- /stdout --
** stderr ** 
	W0716 17:06:39.402217   13484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

TestAddons/Setup (441.27s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-933500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-933500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m21.2727616s)
--- PASS: TestAddons/Setup (441.27s)

TestAddons/parallel/Ingress (65.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-933500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-933500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-933500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f752f4a9-3045-44e0-9375-1c1754393f5c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f752f4a9-3045-44e0-9375-1c1754393f5c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0126151s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.964831s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-933500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0716 17:15:57.704603   13492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-933500 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 ip: (2.3732082s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.27.174.219
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable ingress-dns --alsologtostderr -v=1: (15.6536909s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable ingress --alsologtostderr -v=1: (21.6880889s)
--- PASS: TestAddons/parallel/Ingress (65.57s)

TestAddons/parallel/InspektorGadget (27.7s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fwwtf" [acaa1a79-284d-419b-8129-14fa06df8701] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0202733s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-933500
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-933500: (22.6722245s)
--- PASS: TestAddons/parallel/InspektorGadget (27.70s)

TestAddons/parallel/MetricsServer (22.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.3049ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-2dk8v" [63f4a937-d36e-49c0-8d77-8d7e2a836757] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0188762s
addons_test.go:417: (dbg) Run:  kubectl --context addons-933500 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable metrics-server --alsologtostderr -v=1: (17.0900312s)
--- PASS: TestAddons/parallel/MetricsServer (22.37s)

TestAddons/parallel/HelmTiller (50.88s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.2219ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-7npsr" [5579b661-bacb-48f9-9587-7eed216c2535] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0146869s
addons_test.go:475: (dbg) Run:  kubectl --context addons-933500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-933500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (24.9314985s)
addons_test.go:480: kubectl --context addons-933500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-933500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-933500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.92414s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable helm-tiller --alsologtostderr -v=1: (16.535119s)
--- PASS: TestAddons/parallel/HelmTiller (50.88s)

TestAddons/parallel/CSI (91.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 15.0016ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-933500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-933500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [693b7251-5dc2-4afd-b735-46276e8baa38] Pending
helpers_test.go:344: "task-pv-pod" [693b7251-5dc2-4afd-b735-46276e8baa38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [693b7251-5dc2-4afd-b735-46276e8baa38] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.0079812s
addons_test.go:586: (dbg) Run:  kubectl --context addons-933500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-933500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-933500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-933500 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-933500 delete pod task-pv-pod: (1.4455108s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-933500 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-933500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-933500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ca5257f6-bf8a-45e6-824a-e5797796b386] Pending
helpers_test.go:344: "task-pv-pod-restore" [ca5257f6-bf8a-45e6-824a-e5797796b386] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ca5257f6-bf8a-45e6-824a-e5797796b386] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0134075s
addons_test.go:628: (dbg) Run:  kubectl --context addons-933500 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-933500 delete pod task-pv-pod-restore: (1.7348651s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-933500 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-933500 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.689739s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable volumesnapshots --alsologtostderr -v=1: (15.8856943s)
--- PASS: TestAddons/parallel/CSI (91.20s)

TestAddons/parallel/Headlamp (35.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-933500 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-933500 --alsologtostderr -v=1: (16.0481628s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-nsdnt" [a2f3678d-4a22-411c-b26b-c9c8560c16eb] Pending
helpers_test.go:344: "headlamp-7867546754-nsdnt" [a2f3678d-4a22-411c-b26b-c9c8560c16eb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-nsdnt" [a2f3678d-4a22-411c-b26b-c9c8560c16eb] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0112032s
--- PASS: TestAddons/parallel/Headlamp (35.06s)

TestAddons/parallel/CloudSpanner (21.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-4j5wd" [db13a7f0-90ee-465f-b734-3c1bf745d7d2] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0155971s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-933500
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-933500: (15.4539944s)
--- PASS: TestAddons/parallel/CloudSpanner (21.49s)

TestAddons/parallel/LocalPath (31.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-933500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-933500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [de5ffde7-6098-4e8e-8170-575b86bf6491] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [de5ffde7-6098-4e8e-8170-575b86bf6491] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [de5ffde7-6098-4e8e-8170-575b86bf6491] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0103542s
addons_test.go:992: (dbg) Run:  kubectl --context addons-933500 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 ssh "cat /opt/local-path-provisioner/pvc-e1082ac8-7537-477b-b3d0-c3926ece17e9_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 ssh "cat /opt/local-path-provisioner/pvc-e1082ac8-7537-477b-b3d0-c3926ece17e9_default_test-pvc/file1": (10.9367814s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-933500 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-933500 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.1953366s)
--- PASS: TestAddons/parallel/LocalPath (31.75s)

TestAddons/parallel/NvidiaDevicePlugin (21.79s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wdv2m" [4b9c108e-7da6-4ffc-a3ca-74423d63615e] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0183864s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-933500
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-933500: (15.7564778s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.79s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-pshxt" [7c37bad2-b721-4e3b-bf4a-826e2344c87c] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.014583s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/parallel/Volcano (79.41s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 31.475ms
addons_test.go:897: volcano-admission stabilized in 31.475ms
addons_test.go:905: volcano-controller stabilized in 31.475ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-2dl4x" [3105afec-458d-4538-a9bf-ef202e813922] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0156756s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-r5vwd" [c1e0d7e4-d634-4760-a277-09bd32024f8f] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 6.0151424s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-grcdb" [e6d9382c-650b-42a4-a413-135bc68a4eaa] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0448678s
addons_test.go:924: (dbg) Run:  kubectl --context addons-933500 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-933500 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-933500 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4e370895-76df-4f71-8759-bd4e8daf5d1a] Pending
helpers_test.go:344: "test-job-nginx-0" [4e370895-76df-4f71-8759-bd4e8daf5d1a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4e370895-76df-4f71-8759-bd4e8daf5d1a] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 36.0100644s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933500 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933500 addons disable volcano --alsologtostderr -v=1: (26.3608233s)
--- PASS: TestAddons/parallel/Volcano (79.41s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-933500 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-933500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/StoppedEnableDisable (53.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-933500
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-933500: (41.1375516s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-933500
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-933500: (4.9268244s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-933500
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-933500: (4.6330558s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-933500
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-933500: (2.5884713s)
--- PASS: TestAddons/StoppedEnableDisable (53.29s)

TestCertOptions (470.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-391000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-391000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m48.8669313s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-391000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-391000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.8919816s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-391000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-391000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-391000 -- "sudo cat /etc/kubernetes/admin.conf": (9.7206127s)
helpers_test.go:175: Cleaning up "cert-options-391000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-391000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-391000: (42.3181876s)
--- PASS: TestCertOptions (470.95s)

TestCertExpiration (924.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-841700 --memory=2048 --cert-expiration=3m --driver=hyperv
E0716 19:37:04.089216    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-841700 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m26.5579266s)
E0716 19:44:00.826981    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-841700 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-841700 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m10.5904077s)
helpers_test.go:175: Cleaning up "cert-expiration-841700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-841700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-841700: (47.1605694s)
--- PASS: TestCertExpiration (924.32s)
E0716 19:53:44.107402    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:54:00.820439    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.

TestDockerFlags (539.31s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-158900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-158900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (7m50.7829506s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-158900 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-158900 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.4432169s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-158900 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-158900 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.5185419s)
helpers_test.go:175: Cleaning up "docker-flags-158900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-158900
E0716 19:46:05.805378    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-158900: (47.5606869s)
--- PASS: TestDockerFlags (539.31s)

TestForceSystemdFlag (404.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-818200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-818200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m46.9266108s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-818200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-818200 ssh "docker info --format {{.CgroupDriver}}": (10.1217139s)
helpers_test.go:175: Cleaning up "force-systemd-flag-818200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-818200
E0716 19:39:00.830825    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-818200: (47.4305868s)
--- PASS: TestForceSystemdFlag (404.48s)

TestForceSystemdEnv (547.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-592800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0716 19:34:00.829649    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:36:05.810823    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-592800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (8m10.475631s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-592800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-592800 ssh "docker info --format {{.CgroupDriver}}": (10.1396619s)
helpers_test.go:175: Cleaning up "force-systemd-env-592800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-592800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-592800: (46.7589307s)
--- PASS: TestForceSystemdEnv (547.38s)

TestErrorSpam/start (16.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 start --dry-run: (5.6155317s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 start --dry-run: (5.6414889s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 start --dry-run: (5.5427871s)
--- PASS: TestErrorSpam/start (16.80s)

TestErrorSpam/status (36.62s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 status: (12.6166198s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 status: (12.0503278s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 status: (11.9495572s)
--- PASS: TestErrorSpam/status (36.62s)

TestErrorSpam/pause (22.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 pause: (7.7300903s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 pause: (7.4407536s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 pause: (7.3352797s)
--- PASS: TestErrorSpam/pause (22.51s)

TestErrorSpam/unpause (22.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 unpause: (7.5980296s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 unpause: (7.5520302s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 unpause: (7.4953211s)
--- PASS: TestErrorSpam/unpause (22.65s)

TestErrorSpam/stop (61.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 stop
E0716 17:24:00.787505    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:00.802730    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:00.818111    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:00.849202    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:00.895813    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:00.989938    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:01.162898    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:01.494276    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:02.147670    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:03.433040    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:06.006049    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:11.141790    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:24:21.389592    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 stop: (39.5944342s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 stop
E0716 17:24:41.880074    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 stop: (11.2111377s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-153600 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-153600 stop: (10.5834647s)
--- PASS: TestErrorSpam/stop (61.39s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4740\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (238.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0716 17:25:22.854603    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:26:44.788879    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 17:29:00.795487    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m58.3615633s)
--- PASS: TestFunctional/serial/StartWithProxy (238.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (128.19s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804300 --alsologtostderr -v=8
E0716 17:29:28.634350    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804300 --alsologtostderr -v=8: (2m8.185817s)
functional_test.go:659: soft start took 2m8.188173s for "functional-804300" cluster.
--- PASS: TestFunctional/serial/SoftStart (128.19s)

TestFunctional/serial/KubeContext (0.12s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-804300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (25.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cache add registry.k8s.io/pause:3.1: (8.7315351s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cache add registry.k8s.io/pause:3.3: (8.6816795s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cache add registry.k8s.io/pause:latest: (8.5332744s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (25.95s)

TestFunctional/serial/CacheCmd/cache/add_local (11.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-804300 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1630608858\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-804300 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1630608858\001: (2.4126406s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cache add minikube-local-cache-test:functional-804300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cache add minikube-local-cache-test:functional-804300: (8.2414515s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cache delete minikube-local-cache-test:functional-804300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-804300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.18s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh sudo crictl images: (9.125586s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.13s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.1779609s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.31976s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	W0716 17:32:21.234262    2948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cache reload: (7.9584294s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.1581552s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.62s)

TestFunctional/serial/CacheCmd/cache/delete (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.37s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 kubectl -- --context functional-804300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

TestFunctional/serial/ExtraConfig (123.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0716 17:34:00.797601    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m3.0401222s)
functional_test.go:757: restart took 2m3.0408598s for "functional-804300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (123.04s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-804300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (8.54s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 logs: (8.5360972s)
--- PASS: TestFunctional/serial/LogsCmd (8.54s)

TestFunctional/serial/LogsFileCmd (10.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1421546007\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1421546007\001\logs.txt: (10.7143756s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.72s)

TestFunctional/serial/InvalidService (21.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-804300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-804300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-804300: exit status 115 (16.6603362s)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.27.170.236:31467 |
	|-----------|-------------|-------------|-----------------------------|
-- /stdout --
** stderr ** 
	W0716 17:35:47.899557    8856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-804300 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-804300 delete -f testdata\invalidsvc.yaml: (1.3043773s)
--- PASS: TestFunctional/serial/InvalidService (21.39s)

TestFunctional/parallel/StatusCmd (42.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 status: (13.4010881s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.3736547s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 status -o json: (13.3125004s)
--- PASS: TestFunctional/parallel/StatusCmd (42.09s)

TestFunctional/parallel/ServiceCmdConnect (27.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-804300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-804300 expose deployment hello-node-connect --type=NodePort --port=8080
E0716 17:39:00.801204    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
functional_test.go:1631: (dbg) Done: kubectl --context functional-804300 expose deployment hello-node-connect --type=NodePort --port=8080: (1.5435092s)
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-ltmmx" [415bb7d1-20c2-4f87-be5b-c8c7fcda15b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-ltmmx" [415bb7d1-20c2-4f87-be5b-c8c7fcda15b4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0187519s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 service hello-node-connect --url: (16.8524102s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.27.170.236:31822
functional_test.go:1671: http://172.27.170.236:31822: success! body:

Hostname: hello-node-connect-57b4589c47-ltmmx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.27.170.236:8080/

Request Headers:
	accept-encoding=gzip
	host=172.27.170.236:31822
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.62s)

TestFunctional/parallel/AddonsCmd (0.57s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.57s)

TestFunctional/parallel/PersistentVolumeClaim (42.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c846d719-af54-492f-8e1a-b4bb2a912d7f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0147446s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-804300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-804300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-804300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-804300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [858f6282-39c6-4d06-96eb-a40802979be1] Pending
helpers_test.go:344: "sp-pod" [858f6282-39c6-4d06-96eb-a40802979be1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [858f6282-39c6-4d06-96eb-a40802979be1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0106903s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-804300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-804300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-804300 delete -f testdata/storage-provisioner/pod.yaml: (1.5626019s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-804300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2f0ebcdd-db79-40a8-a3ab-b0becebcec0f] Pending
helpers_test.go:344: "sp-pod" [2f0ebcdd-db79-40a8-a3ab-b0becebcec0f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2f0ebcdd-db79-40a8-a3ab-b0becebcec0f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0198303s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-804300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.59s)

TestFunctional/parallel/SSHCmd (20.09s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "echo hello": (9.8632005s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "cat /etc/hostname": (10.2239591s)
--- PASS: TestFunctional/parallel/SSHCmd (20.09s)

TestFunctional/parallel/CpCmd (58.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.071711s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh -n functional-804300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh -n functional-804300 "sudo cat /home/docker/cp-test.txt": (10.2142739s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cp functional-804300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2675498015\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cp functional-804300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2675498015\001\cp-test.txt: (10.298121s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh -n functional-804300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh -n functional-804300 "sudo cat /home/docker/cp-test.txt": (10.0971703s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0625473s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh -n functional-804300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh -n functional-804300 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.3882475s)
--- PASS: TestFunctional/parallel/CpCmd (58.14s)

TestFunctional/parallel/MySQL (67.92s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-804300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-vvxrr" [e382fb22-e7af-4e42-a882-a155ad85cb1c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-vvxrr" [e382fb22-e7af-4e42-a882-a155ad85cb1c] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 49.0181208s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;": exit status 1 (299.1584ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;": exit status 1 (285.0651ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;": exit status 1 (307.1978ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;": exit status 1 (314.1362ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;": exit status 1 (309ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-804300 exec mysql-64454c8b5c-vvxrr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (67.92s)

TestFunctional/parallel/FileSync (9.81s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/4740/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/test/nested/copy/4740/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/test/nested/copy/4740/hosts": (9.805723s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.81s)

TestFunctional/parallel/CertSync (61.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/4740.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/4740.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/4740.pem": (10.2687159s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/4740.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /usr/share/ca-certificates/4740.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /usr/share/ca-certificates/4740.pem": (10.1660232s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.9412963s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/47402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/47402.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/47402.pem": (10.2825266s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/47402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /usr/share/ca-certificates/47402.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /usr/share/ca-certificates/47402.pem": (10.2194352s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.2665606s)
--- PASS: TestFunctional/parallel/CertSync (61.15s)

TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-804300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 ssh "sudo systemctl is-active crio": exit status 1 (10.2547402s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0716 17:37:07.386192   14720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.25s)

TestFunctional/parallel/License (3.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.322148s)
--- PASS: TestFunctional/parallel/License (3.34s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-804300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-804300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-s6qs6" [d1ee89ed-4ee8-469b-8625-e6ca0b336dc8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-s6qs6" [d1ee89ed-4ee8-469b-8625-e6ca0b336dc8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.0202981s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (11.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.9921972s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.36s)

TestFunctional/parallel/ProfileCmd/profile_list (10.77s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.5729475s)
functional_test.go:1311: Took "10.5729475s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "198.012ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.77s)

TestFunctional/parallel/ServiceCmd/List (13.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 service list: (13.1302993s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (10.69s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.5182495s)
functional_test.go:1362: Took "10.5189098s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "169.8202ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (12.95s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 service list -o json: (12.9481998s)
functional_test.go:1490: Took "12.9483709s" to run "out/minikube-windows-amd64.exe -p functional-804300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (12.95s)

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (7.87s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 version -o=json --components: (7.8695012s)
--- PASS: TestFunctional/parallel/Version/components (7.87s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls --format short --alsologtostderr: (7.2491569s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-804300
docker.io/kicbase/echo-server:functional-804300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804300 image ls --format short --alsologtostderr:
W0716 17:39:43.027030    9428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 17:39:43.035029    9428 out.go:291] Setting OutFile to fd 520 ...
I0716 17:39:43.035029    9428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:43.035029    9428 out.go:304] Setting ErrFile to fd 1016...
I0716 17:39:43.036026    9428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:43.052024    9428 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:43.053044    9428 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:43.054025    9428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:39:45.301585    9428 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:39:45.301688    9428 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:45.318118    9428 ssh_runner.go:195] Run: systemctl --version
I0716 17:39:45.318118    9428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:39:47.418522    9428 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:39:47.418522    9428 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:47.419134    9428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
I0716 17:39:49.980025    9428 main.go:141] libmachine: [stdout =====>] : 172.27.170.236

I0716 17:39:49.980830    9428 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:49.981205    9428 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
I0716 17:39:50.089368    9428 ssh_runner.go:235] Completed: systemctl --version: (4.7711267s)
I0716 17:39:50.100621    9428 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls --format table --alsologtostderr: (7.6431443s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-804300 | 312946b4ec2f5 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-804300 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804300 image ls --format table --alsologtostderr:
W0716 17:40:01.618022    6780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 17:40:01.625997    6780 out.go:291] Setting OutFile to fd 624 ...
I0716 17:40:01.642138    6780 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:40:01.642138    6780 out.go:304] Setting ErrFile to fd 844...
I0716 17:40:01.642138    6780 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:40:01.660323    6780 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:40:01.660764    6780 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:40:01.661321    6780 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:40:03.952213    6780 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:40:03.952729    6780 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:03.966219    6780 ssh_runner.go:195] Run: systemctl --version
I0716 17:40:03.966219    6780 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:40:06.243188    6780 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:40:06.243848    6780 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:06.244160    6780 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
I0716 17:40:08.932386    6780 main.go:141] libmachine: [stdout =====>] : 172.27.170.236

I0716 17:40:08.933380    6780 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:08.933589    6780 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
I0716 17:40:09.048646    6780 ssh_runner.go:235] Completed: systemctl --version: (5.0824069s)
I0716 17:40:09.064314    6780 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.64s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls --format json --alsologtostderr: (7.4577793s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804300 image ls --format json --alsologtostderr:
[{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-804300"],"size":"4940000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"312946b4ec2f502a000afbd3cd126c54660224f323510b0a0f6649af160701aa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-804300"],"size":"30"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804300 image ls --format json --alsologtostderr:
W0716 17:39:57.376387    9476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 17:39:57.383402    9476 out.go:291] Setting OutFile to fd 612 ...
I0716 17:39:57.383402    9476 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:57.383402    9476 out.go:304] Setting ErrFile to fd 564...
I0716 17:39:57.384423    9476 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:57.398384    9476 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:57.399405    9476 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:57.400401    9476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:39:59.623149    9476 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:39:59.623149    9476 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:59.634845    9476 ssh_runner.go:195] Run: systemctl --version
I0716 17:39:59.634845    9476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:40:01.839517    9476 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:40:01.839636    9476 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:01.839636    9476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
I0716 17:40:04.549502    9476 main.go:141] libmachine: [stdout =====>] : 172.27.170.236

I0716 17:40:04.549502    9476 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:04.549502    9476 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
I0716 17:40:04.655675    9476 ssh_runner.go:235] Completed: systemctl --version: (5.0208098s)
I0716 17:40:04.665675    9476 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls --format yaml --alsologtostderr: (7.4251812s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804300 image ls --format yaml --alsologtostderr:
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-804300
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 312946b4ec2f502a000afbd3cd126c54660224f323510b0a0f6649af160701aa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-804300
size: "30"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804300 image ls --format yaml --alsologtostderr:
W0716 17:39:49.955025    9612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 17:39:49.963019    9612 out.go:291] Setting OutFile to fd 944 ...
I0716 17:39:49.981205    9612 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:49.981205    9612 out.go:304] Setting ErrFile to fd 844...
I0716 17:39:49.981205    9612 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:49.999800    9612 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:50.000534    9612 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:50.001413    9612 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:39:52.215859    9612 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:39:52.215924    9612 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:52.231694    9612 ssh_runner.go:195] Run: systemctl --version
I0716 17:39:52.231694    9612 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:39:54.419092    9612 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:39:54.419376    9612 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:54.419376    9612 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
I0716 17:39:57.090109    9612 main.go:141] libmachine: [stdout =====>] : 172.27.170.236

I0716 17:39:57.090109    9612 main.go:141] libmachine: [stderr =====>] : 
I0716 17:39:57.090874    9612 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
I0716 17:39:57.197281    9612 ssh_runner.go:235] Completed: systemctl --version: (4.965567s)
I0716 17:39:57.209668    9612 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (26.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804300 ssh pgrep buildkitd: exit status 1 (9.5427909s)

                                                
                                                
** stderr ** 
	W0716 17:39:50.283538    7188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image build -t localhost/my-image:functional-804300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image build -t localhost/my-image:functional-804300 testdata\build --alsologtostderr: (10.2221937s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804300 image build -t localhost/my-image:functional-804300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in a3386d0ae96a
---> Removed intermediate container a3386d0ae96a
---> 52268bdfbb97
Step 3/3 : ADD content.txt /
---> 66aa4b0fe711
Successfully built 66aa4b0fe711
Successfully tagged localhost/my-image:functional-804300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804300 image build -t localhost/my-image:functional-804300 testdata\build --alsologtostderr:
W0716 17:39:59.820097    9380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0716 17:39:59.827119    9380 out.go:291] Setting OutFile to fd 624 ...
I0716 17:39:59.846089    9380 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:59.846089    9380 out.go:304] Setting ErrFile to fd 844...
I0716 17:39:59.846089    9380 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:39:59.870432    9380 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:59.889222    9380 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:39:59.890262    9380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:40:02.108048    9380 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:40:02.108244    9380 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:02.123086    9380 ssh_runner.go:195] Run: systemctl --version
I0716 17:40:02.124118    9380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-804300 ).state
I0716 17:40:04.430344    9380 main.go:141] libmachine: [stdout =====>] : Running

I0716 17:40:04.430344    9380 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:04.430344    9380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-804300 ).networkadapters[0]).ipaddresses[0]
I0716 17:40:06.956330    9380 main.go:141] libmachine: [stdout =====>] : 172.27.170.236

I0716 17:40:06.956552    9380 main.go:141] libmachine: [stderr =====>] : 
I0716 17:40:06.956620    9380 sshutil.go:53] new ssh client: &{IP:172.27.170.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-804300\id_rsa Username:docker}
I0716 17:40:07.063103    9380 ssh_runner.go:235] Completed: systemctl --version: (4.939917s)
I0716 17:40:07.063204    9380 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2496960191.tar
I0716 17:40:07.077547    9380 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0716 17:40:07.108546    9380 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2496960191.tar
I0716 17:40:07.117752    9380 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2496960191.tar: stat -c "%s %y" /var/lib/minikube/build/build.2496960191.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2496960191.tar': No such file or directory
I0716 17:40:07.117752    9380 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2496960191.tar --> /var/lib/minikube/build/build.2496960191.tar (3072 bytes)
I0716 17:40:07.175894    9380 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2496960191
I0716 17:40:07.205847    9380 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2496960191 -xf /var/lib/minikube/build/build.2496960191.tar
I0716 17:40:07.223863    9380 docker.go:360] Building image: /var/lib/minikube/build/build.2496960191
I0716 17:40:07.231866    9380 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-804300 /var/lib/minikube/build/build.2496960191
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0716 17:40:09.829695    9380 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-804300 /var/lib/minikube/build/build.2496960191: (2.5978186s)
I0716 17:40:09.842371    9380 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2496960191
I0716 17:40:09.882825    9380 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2496960191.tar
I0716 17:40:09.904829    9380 build_images.go:217] Built localhost/my-image:functional-804300 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2496960191.tar
I0716 17:40:09.904829    9380 build_images.go:133] succeeded building to: functional-804300
I0716 17:40:09.904829    9380 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls: (7.1448041s)
E0716 17:40:23.999155    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (26.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.43s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (2.1956345s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-804300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image load --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image load --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr: (9.0770665s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls: (7.8538418s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image load --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image load --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr: (8.3404311s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls: (7.9767101s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.0562015s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-804300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image load --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image load --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr: (8.4896464s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls: (7.5647365s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.36s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (43.14s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-804300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-804300": (28.3664312s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804300 docker-env | Invoke-Expression ; docker images": (14.7487646s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (43.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image save docker.io/kicbase/echo-server:functional-804300 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image save docker.io/kicbase/echo-server:functional-804300 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.2000448s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (15.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image rm docker.io/kicbase/echo-server:functional-804300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image rm docker.io/kicbase/echo-server:functional-804300 --alsologtostderr: (7.8717999s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls: (7.8584358s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.3296034s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image ls: (7.5559626s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.89s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.65s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 update-context --alsologtostderr -v=2: (2.6489191s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.65s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.46s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 update-context --alsologtostderr -v=2: (2.4558533s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.46s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.45s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 update-context --alsologtostderr -v=2: (2.4437869s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-804300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804300 image save --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804300 image save --daemon docker.io/kicbase/echo-server:functional-804300 --alsologtostderr: (8.9737401s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-804300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.92s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-804300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-804300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-804300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-804300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4968: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13420: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.92s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-804300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (47.98s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-804300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b5ccb3de-ac6e-4aef-934e-4e542eb90dc0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b5ccb3de-ac6e-4aef-934e-4e542eb90dc0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 47.0130606s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (47.98s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-804300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9664: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/delete_echo-server_images (0.39s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-804300
--- PASS: TestFunctional/delete_echo-server_images (0.39s)

TestFunctional/delete_my-image_image (0.17s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-804300
--- PASS: TestFunctional/delete_my-image_image (0.17s)

TestFunctional/delete_minikube_cached_images (0.19s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-804300
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestMultiControlPlane/serial/NodeLabels (0.18s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-339000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestImageBuild/serial/Setup (199.46s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-211900 --driver=hyperv
E0716 18:19:00.799673    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 18:19:08.974409    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-211900 --driver=hyperv: (3m19.4578823s)
--- PASS: TestImageBuild/serial/Setup (199.46s)

TestImageBuild/serial/NormalBuild (9.96s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-211900
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-211900: (9.956817s)
--- PASS: TestImageBuild/serial/NormalBuild (9.96s)

TestImageBuild/serial/BuildWithBuildArg (9.01s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-211900
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-211900: (9.0136771s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.01s)

TestImageBuild/serial/BuildWithDockerIgnore (7.85s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-211900
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-211900: (7.844057s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.85s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.6s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-211900
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-211900: (7.6041022s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.60s)

TestJSONOutput/start/Command (212.26s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-728600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0716 18:24:00.803283    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-728600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m32.2601648s)
--- PASS: TestJSONOutput/start/Command (212.26s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.94s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-728600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-728600 --output=json --user=testUser: (7.9355872s)
--- PASS: TestJSONOutput/pause/Command (7.94s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.95s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-728600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-728600 --output=json --user=testUser: (7.9480093s)
--- PASS: TestJSONOutput/unpause/Command (7.95s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (39.92s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-728600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-728600 --output=json --user=testUser: (39.922267s)
--- PASS: TestJSONOutput/stop/Command (39.92s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.34s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-838300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-838300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (208.5336ms)
-- stdout --
	{"specversion":"1.0","id":"a84103a8-95d3-4198-9c7f-e2a73ec70beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-838300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"38e1c252-2e41-4737-a5ef-dd93ac13fed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"31f97e26-cae1-4041-83ed-2ef7a710da54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9565fc64-7b58-4a64-9b5b-dfa0571701d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"481e74eb-ee5f-4f8e-891e-83167106f29d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19265"}}
	{"specversion":"1.0","id":"141129bf-24a6-4eb3-ae15-4d5b4d8d4a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"58a8670e-9d3c-4006-8adb-07fa057a5757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0716 18:26:20.547116   11736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-838300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-838300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-838300: (1.132011s)
--- PASS: TestErrorJSONOutput (1.34s)

TestMainNoArgs (0.17s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.17s)

TestMinikubeProfile (524.45s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-471700 --driver=hyperv
E0716 18:29:00.802536    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-471700 --driver=hyperv: (3m16.5265713s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-471700 --driver=hyperv
E0716 18:30:24.037020    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 18:31:05.792107    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-471700 --driver=hyperv: (3m20.6716592s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-471700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.5215094s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-471700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.551639s)
helpers_test.go:175: Cleaning up "second-471700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-471700
E0716 18:34:00.814196    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-471700: (46.2511537s)
helpers_test.go:175: Cleaning up "first-471700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-471700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-471700: (41.159819s)
--- PASS: TestMinikubeProfile (524.45s)

TestMountStart/serial/StartWithMountFirst (156.53s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-477500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0716 18:35:48.992084    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 18:36:05.799624    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-477500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.5152958s)
--- PASS: TestMountStart/serial/StartWithMountFirst (156.53s)

TestMountStart/serial/VerifyMountFirst (9.54s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-477500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-477500 ssh -- ls /minikube-host: (9.5353713s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.54s)

TestMountStart/serial/StartWithMountSecond (155.75s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-477500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0716 18:39:00.806606    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-477500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m34.7425742s)
--- PASS: TestMountStart/serial/StartWithMountSecond (155.75s)

TestMountStart/serial/VerifyMountSecond (9.69s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-477500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-477500 ssh -- ls /minikube-host: (9.6850514s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.69s)

TestMountStart/serial/DeleteFirst (27.82s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-477500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-477500 --alsologtostderr -v=5: (27.8158452s)
--- PASS: TestMountStart/serial/DeleteFirst (27.82s)

TestMountStart/serial/VerifyMountPostDelete (9.44s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-477500 ssh -- ls /minikube-host
E0716 18:41:05.803532    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-477500 ssh -- ls /minikube-host: (9.440207s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.44s)

TestMountStart/serial/Stop (26.12s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-477500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-477500: (26.1206551s)
--- PASS: TestMountStart/serial/Stop (26.12s)

TestMountStart/serial/RestartStopped (117.51s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-477500
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-477500: (1m56.5054724s)
--- PASS: TestMountStart/serial/RestartStopped (117.51s)

TestMountStart/serial/VerifyMountPostStop (9.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-477500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-477500 ssh -- ls /minikube-host: (9.2766691s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.28s)

TestMultiNode/serial/MultiNodeLabels (0.22s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-343600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.22s)

TestMultiNode/serial/ProfileList (9.84s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.8381453s)
--- PASS: TestMultiNode/serial/ProfileList (9.84s)

TestPreload (536.98s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-293500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0716 19:19:00.820578    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:20:24.077226    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:21:05.811129    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-293500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m33.8259121s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-293500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-293500 image pull gcr.io/k8s-minikube/busybox: (8.8255648s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-293500
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-293500: (40.8883373s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-293500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0716 19:24:00.825332    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:25:49.019020    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
E0716 19:26:05.805327    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-293500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m42.280912s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-293500 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-293500 image list: (7.4171683s)
helpers_test.go:175: Cleaning up "test-preload-293500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-293500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-293500: (43.7409848s)
--- PASS: TestPreload (536.98s)

TestScheduledStopWindows (337.03s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-332300 --memory=2048 --driver=hyperv
E0716 19:29:00.826276    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-332300 --memory=2048 --driver=hyperv: (3m23.8863848s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-332300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-332300 --schedule 5m: (10.7990421s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-332300 -n scheduled-stop-332300
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-332300 -n scheduled-stop-332300: exit status 1 (10.0211598s)

** stderr ** 
	W0716 19:30:41.853628   12692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-332300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-332300 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.8347585s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-332300 --schedule 5s
E0716 19:31:05.800967    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-332300 --schedule 5s: (10.7968598s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-332300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-332300: exit status 7 (2.3060937s)

-- stdout --
	scheduled-stop-332300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0716 19:32:12.517884    8160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-332300 -n scheduled-stop-332300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-332300 -n scheduled-stop-332300: exit status 7 (2.3277028s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0716 19:32:14.826107   13476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-332300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-332300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-332300: (27.0356478s)
--- PASS: TestScheduledStopWindows (337.03s)
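The plain-text status blocks captured in the test above are simple `key: value` lines (`type`, `host`, `kubelet`, `apiserver`, `kubeconfig`) under a leading profile name, so they can be checked mechanically. A minimal parsing sketch, assuming only the field layout shown in the stdout block above (the `parse_status` helper is hypothetical, not part of the test suite):

```python
def parse_status(text: str) -> dict:
    """Parse a plain-text `minikube status` block into a dict.

    The first line is the profile name; the remaining lines are
    `key: value` pairs, matching the stdout captured in the log above.
    """
    lines = text.strip().splitlines()
    fields = {"profile": lines[0]}
    for line in lines[1:]:
        key, sep, value = line.partition(":")
        if sep and value.strip():
            fields[key.strip()] = value.strip()
    return fields

sample = """\
scheduled-stop-332300
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
"""
status = parse_status(sample)
```

A scheduled-stop check then reduces to asserting `status["host"] == "Stopped"`, which is what the test's `--format={{.Host}}` invocation extracts directly.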

TestRunningBinaryUpgrade (1039.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1330032543.exe start -p running-upgrade-777700 --memory=2200 --vm-driver=hyperv
E0716 19:41:05.805471    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1330032543.exe start -p running-upgrade-777700 --memory=2200 --vm-driver=hyperv: (8m46.5443175s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-777700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0716 19:49:00.826332    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-777700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m18.3521162s)
helpers_test.go:175: Cleaning up "running-upgrade-777700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-777700
E0716 19:56:05.805961    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-777700: (1m12.9744596s)
--- PASS: TestRunningBinaryUpgrade (1039.02s)

TestKubernetesUpgrade (1389.07s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
E0716 19:42:29.031280    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (8m36.0762573s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-487700
E0716 19:51:05.808966    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-487700: (37.7619817s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-487700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-487700 status --format={{.Host}}: exit status 7 (2.5760174s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0716 19:51:17.783495    9956 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperv: (5m39.9145475s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-487700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (215.903ms)

-- stdout --
	* [kubernetes-upgrade-487700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0716 19:57:00.448618    8308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-487700
	    minikube start -p kubernetes-upgrade-487700 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4877002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-487700 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperv
E0716 19:59:00.825684    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-933500\client.crt: The system cannot find the path specified.
E0716 19:59:09.038532    4740 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-804300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-487700 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperv: (7m26.0730265s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-487700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-487700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-487700: (46.2883951s)
--- PASS: TestKubernetesUpgrade (1389.07s)
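The `K8S_DOWNGRADE_UNSUPPORTED` refusal exercised above reduces to an ordering comparison on the numeric core of the two Kubernetes version strings. A simplified sketch of that comparison, not minikube's actual implementation (which additionally handles pre-release ordering and config state):

```python
def version_core(v: str) -> tuple:
    """Return the numeric (major, minor, patch) core of a version string,
    e.g. 'v1.31.0-beta.0' -> (1, 31, 0)."""
    core = v.lstrip("v").partition("-")[0]
    return tuple(int(part) for part in core.split("."))

def is_downgrade(running: str, requested: str) -> bool:
    # A start requesting a strictly older numeric core than the running
    # cluster is refused, as in the exit status 106 captured above.
    return version_core(requested) < version_core(running)

downgrade = is_downgrade("v1.31.0-beta.0", "v1.20.0")
```

Tuple comparison makes `(1, 20, 0) < (1, 31, 0)` true, so the v1.20.0 request against the v1.31.0-beta.0 cluster is flagged as a downgrade.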

TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-673300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-673300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (272.6136ms)

-- stdout --
	* [NoKubernetes-673300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0716 19:32:44.218006    8848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)
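The `MK_USAGE` failure above is the expected behaviour: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. The same flag-conflict pattern can be illustrated with Python's `argparse` (an illustration only, not minikube's Go flag handling):

```python
import argparse

parser = argparse.ArgumentParser(prog="start")
group = parser.add_mutually_exclusive_group()
group.add_argument("--no-kubernetes", action="store_true")
group.add_argument("--kubernetes-version")

# Passing both flags is rejected, mirroring minikube's exit status 14 above.
try:
    parser.parse_args(["--no-kubernetes", "--kubernetes-version", "1.20"])
    rejected = False
except SystemExit:
    rejected = True
```

Either flag alone parses cleanly; only the combination trips the usage error.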

TestStoppedBinaryUpgrade/Setup (0.84s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

TestStoppedBinaryUpgrade/Upgrade (854.54s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.470412164.exe start -p stopped-upgrade-608600 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.470412164.exe start -p stopped-upgrade-608600 --memory=2200 --vm-driver=hyperv: (7m10.2454971s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.470412164.exe -p stopped-upgrade-608600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.470412164.exe -p stopped-upgrade-608600 stop: (37.6653766s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-608600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-608600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m26.6249507s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (854.54s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-608600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-608600: (9.4628557s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.46s)

Test skip (32/210)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-804300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-804300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 2348: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-804300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0521361s)

-- stdout --
	* [functional-804300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0716 17:36:53.563694   13864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 17:36:53.565391   13864 out.go:291] Setting OutFile to fd 928 ...
	I0716 17:36:53.565729   13864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:36:53.565729   13864 out.go:304] Setting ErrFile to fd 944...
	I0716 17:36:53.565729   13864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:36:53.606088   13864 out.go:298] Setting JSON to false
	I0716 17:36:53.611093   13864 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18252,"bootTime":1721158360,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:36:53.611093   13864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:36:53.617161   13864 out.go:177] * [functional-804300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:36:53.622309   13864 notify.go:220] Checking for updates...
	I0716 17:36:53.625519   13864 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:36:53.628891   13864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:36:53.632061   13864 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:36:53.634374   13864 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:36:53.638021   13864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:36:53.641987   13864 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:36:53.644093   13864 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.06s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-804300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0544505s)

-- stdout --
	* [functional-804300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0716 17:36:48.480515    2696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0716 17:36:48.481505    2696 out.go:291] Setting OutFile to fd 252 ...
	I0716 17:36:48.482500    2696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:36:48.482500    2696 out.go:304] Setting ErrFile to fd 808...
	I0716 17:36:48.482500    2696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0716 17:36:48.503492    2696 out.go:298] Setting JSON to false
	I0716 17:36:48.508503    2696 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18247,"bootTime":1721158360,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0716 17:36:48.509492    2696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0716 17:36:48.511514    2696 out.go:177] * [functional-804300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0716 17:36:48.515625    2696 notify.go:220] Checking for updates...
	I0716 17:36:48.518518    2696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0716 17:36:48.521496    2696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0716 17:36:48.525500    2696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0716 17:36:48.528499    2696 out.go:177]   - MINIKUBE_LOCATION=19265
	I0716 17:36:48.530508    2696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0716 17:36:48.535505    2696 config.go:182] Loaded profile config "functional-804300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0716 17:36:48.536501    2696 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.06s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
